February 26, 2010
Forwarding Defamatory Email with Introductory Comments Protected by 47 USC 230--Phan v. Pham
By Eric Goldman
Phan v. Pham, 2010 WL 658244 (Cal. Ct. App. Feb. 25, 2010)
This is the first 230 case I'm blogging about in 2010 (see my 2009 recap), and what a nice ruling to start the year. The facts are crisp and clean: An email author wrote an allegedly defamatory email about plaintiff Phan and sent it to a group of recipients, including defendant Pham. Pham then forwarded the email with the following additional comments to at least one recipient:
“Everything will come out to the daylight, I invite you and our classmates to read the following comments of Senior Duc (Duc Xuan Nguyen) President of the Federation of Associations of the Republic of Vietnam Navy and Merchant Marine.”
[Note: the relevant emails were originally in Vietnamese, but no one contested the English translations]
As we know, 47 USC 230 divides the universe of content into two buckets: first party content and third party content. In some cases, like Roommates.com or Mazur, the division between the two buckets is very murky. But in this fact pattern, Pham's email can be cleanly zoned into first party and third party content, which makes 230's application pretty simple. Per the 2006 California Supreme Court ruling in Barrett v. Rosenthal, California law is unambiguous that Pham isn't liable for the contents of the forwarded email.
But what about the fact that Pham chose to forward the email? In the post-Roommates.com era, I routinely encounter misguided arguments that an online actor's affirmative decision to republish third party content strips away 47 USC 230 protection. Roommates.com even has some ambiguous language suggesting this outcome. (The court is so baffled by the Roommates.com opinion that it simply quotes multiple paragraphs in a footnote, as if we will be able to divine some coherence from the language that escaped this judicial panel).
However, that's never been the law (see, e.g., Barrett v. Rosenthal, Batzel v. Smith, D'Alonzo v. Truscello and the many Ripoff Report cases; but see the goofy Woodhull case). As this case illustrates, nothing on this point has changed in the post-Roommates.com era. Instead, the court believes (as I do) that the Roommates.com opinion turned on the fact that Roommates.com "created a website 'designed to solicit and enforce housing preferences that are alleged to be illegal.'” Without the mandatory illegality, 230(c)(1) protects an editorial decision to publish third party content just as much as it protects the decision not to. The court expressly rejects any implication that Roommates.com trumped Barrett v. Rosenthal or California law on that point.
In a footnote, the court says it is expressly not addressing the Moreno concurrence in Barrett v. Rosenthal that an alleged conspiracy between content originator and content republisher would trump 230.
For those of you keeping score, this is a rare case where Roommates.com was distinguished rather than cited in support of the defense. But in the end, the defense still easily won the 47 USC 230 ruling.
With 230 resolved in Pham's favor, the only remaining question is Pham's liability for his first party content--the email introduction he wrote. There's no question that Pham can be liable for his own words (see, e.g., the uncited Tefft case), but in a brief and breezy opinion, the court says that Pham didn't say anything impermissible. The only "substance" to Pham's introduction is a vague assertion to the effect that the truth will come out. That's not defamatory, even if it implicitly suggests that the forwarded email may help enlighten the truth.
Rare Ruling on Damages for Sending Bogus Copyright Takedown Notice--Lenz v. Universal
By Eric Goldman
Lenz v. Universal Music Corp., 5:07-cv-03783-JF (N.D. Cal. Feb. 25, 2010)
In the lawsuit over the allegedly bogus takedown of a YouTube video of a baby dancing to Prince's "Let's Go Crazy" (previous blog coverage), Judge Fogel has defined some standards for computing damages in a 17 USC 512(f) case, which creates a cause of action for sending certain types of bogus copyright takedown notices. I can't recall another case discussing the damages requirements of a 512(f) claim--the only other definitive 512(f) plaintiff's win was Online Policy Group v. Diebold (also before Judge Fogel), which settled for $125k before Judge Fogel reached damages. As a result, I believe this is a novel ruling which could have significant implications for future 512(f) cases.
512(f) says that the sender of an impermissible takedown notice "shall be liable for any damages, including costs and attorneys' fees, incurred [by the 512(f) plaintiff] as the result of the service provider relying upon such misrepresentation..." Judge Fogel interprets the language to mean that "a §512(f) plaintiff’s damages must be proximately caused by the misrepresentation to the service provider and the service provider’s reliance on the misrepresentation." Accordingly, 512(f) does not require plaintiffs to show that they suffered economic losses.
At the same time, the judge says that the statute does not include all damages that occurred "but for" the bogus takedown notice. Specifically, the attorneys' fees and costs associated with actually bringing a 512(f) claim (as opposed to getting legal guidance to file a 512(g) counternotice) are not covered by 512(f). Instead, "while any fees incurred for work in responding to the takedown notice and prior to the institution of suit under § 512(f) are recoverable under that provision, recovery of any other costs and fees is governed by § 505." Section 505 says that a court may award costs and reasonable attorneys' fees to the prevailing party at its discretion. By definition, a 512(f) violation occurs only when the takedown notice sender has made a knowing and material misrepresentation, so most judges probably will exercise their discretion favorably towards a winning 512(f) plaintiff, but it's not an automatic award. Furthermore, by implication, losing 512(f) plaintiffs could be ordered to pay the defendants’ costs and attorneys’ fees.
Overall, I think the legal rules outlined in this case are more favorable to 512(f) plaintiffs than not, but they also remind us how 512(f)'s utility may be limited in practice. Not only must the 512(f) plaintiff overcome the Rossi case, which effectively mooted claims for erroneous takedown notices, but this ruling illustrates how hard 512(f) plaintiffs have to work to find compensable damages. We don't see many 512(f) cases being brought. Watching this case, it's easy to see why.
In the rest of the opinion, Judge Fogel rejects a number of other affirmative defenses advanced by Universal, including its withering attacks asserting that Lenz acted in bad faith and has unclean hands.
February 22, 2010
Google AdWords Contract Upheld Again, Causing a Venue Transfer in Flowbee v. Google
By Eric Goldman
Google has successfully transferred Flowbee v. Google from Texas to California by invoking the venue selection clause in its AdWords contract. The result is noteworthy for three reasons.
Second, the court said the scope of the venue selection clause covered Flowbee’s tort claim for trademark infringement. This was not a guaranteed conclusion. Clearly, the clause governs Flowbee’s AdWords purchases, but it does not automatically govern Flowbee’s beefs with other advertisers’ AdWords purchases under their independent contracts with Google. Compare Miller v. Facebook with Yahoo v. American Airlines. This court, bound by the Fifth Circuit ruling in the Yahoo case, nevertheless distinguished it because the Yahoo clause specified that business partners “submit” to exclusive jurisdiction, while the Google clause used the words “litigated exclusively.”
Third, Google surely is happy to have this case out of a lousy venue (Southern District of Texas) and into its home court. If only it could transfer the other AdWords cases too!
Having successfully transferred the case, Google then filed its answer and a counterclaim that Flowbee breached the mandatory venue clause by suing Google in Southern District of Texas. This is at least the second time Google has tried this type of contract breach claim (I blogged on the same tactic in the John Beck Amazing Profits case), so Google’s turn-the-tables move isn’t really news even though it prompted a TechCrunch post.
However, Google’s contract breach counterclaim highlights how Google got caught in flagrante delicto in its collections suit against myTriggers by bringing that suit in Ohio state court. In Flowbee, Google takes the position that bringing a suit in the wrong venue is an actionable contract breach. I’m not exactly sure how Google would respond if myTriggers countersued Google for breaching its AdWords contract by suing myTriggers in Ohio rather than California. I don’t think myTriggers could claim any damages from Google’s breach, nor do I expect myTriggers would complain about the breach because it’s probably thrilled to have trapped Google in an undesirable forum. However, Google is walking an arguably duplicitous line.
As the TechCrunch post points out, Google’s answer contains some “whoops” references to Rosetta Stone instead of Flowbee. By inference, Google probably cloned-and-revised its Rosetta Stone answer for the Flowbee litigation. My tip for clean living: whenever I clone-and-revise a document to use for a different party, the very first thing I do is a global search-and-replace to change the party names.
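That global search-and-replace can even be scripted. Here is a minimal Python sketch of the idea (the file names and party names are hypothetical stand-ins, not Google's actual filings):

```python
# Clone a template filing and swap every party-name reference up front,
# so no stray mention of the old party survives into the new document.
# File names and party names below are hypothetical examples.

def clone_and_rename(template_path, output_path, old_party, new_party):
    with open(template_path, encoding="utf-8") as f:
        text = f.read()
    # Global replace: every occurrence of the old party name is swapped.
    text = text.replace(old_party, new_party)
    with open(output_path, "w", encoding="utf-8") as f:
        f.write(text)

# Demo with a stand-in template file.
with open("template_answer.txt", "w", encoding="utf-8") as f:
    f.write("Defendant Rosetta Stone denies each allegation. "
            "Rosetta Stone further denies any infringement.")

clone_and_rename("template_answer.txt", "flowbee_answer.txt",
                 "Rosetta Stone", "Flowbee")
```

A case-sensitive replace like this misses variant spellings ("ROSETTA STONE" in a caption, say), which is exactly how "whoops" references slip through; a careful version would also check for leftover occurrences before filing.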
The roster of pending AdWords cases (I most recently double-checked the status of these cases on February 20, 2010):
* Ezzo v. Google
* Rescuecom v. Google
* FPX v. Google
* John Beck Amazing Profits v. Google
and the companion Google v. John Beck Amazing Profits
* Stratton Faxon v. Google (not initially a trademark case)
* Soaring Helmet v. Bill Me
* Ascentive v. Google
* Jurin v. Google 1.0 (voluntarily dismissed), succeeded by Jurin v. Google 2.0
* Rosetta Stone v. Google
* Flowbee v. Google
* Parts Geek v. US Auto Parts
* Dazzlesmile v. Epic
February 19, 2010
Clickthrough Agreement With Acknowledgement Checkbox Enforced--Scherillo v. Dun & Bradstreet
By Eric Goldman
Scherillo v. Dun & Bradstreet, Inc., 2010 WL 537805 (E.D.N.Y. Feb. 17, 2010)
I teach my Cyberspace Law students that the most effective online contract formation process is a "mandatory non-leaky clickthrough agreement":
* mandatory = the user cannot proceed to the destination without going through a screen soliciting their consent to the user agreement.
* non-leaky = there are no alternative ways for the user to reach the destination. I realize this is redundant with "mandatory," but I remind students that a seemingly mandatory process can have leaks. For example, if customer support representatives occasionally set up user accounts manually, the mandatory online process has become leaky because a few users have now reached the destination without consenting to the agreement.
* clickthrough = the user manifests assent to the contract by clicking, and the user is told that the click signifies assent.
There are other ways to form online contracts (e.g., email exchanges), but if executed properly, the mandatory non-leaky clickthrough process should do very well against contract formation challenges. But even this description leaves open a number of user interaction judgments. Does likelihood of contract formation vary if:
* the agreement terms are presented on the clickthrough page itself or are only available for review by hyperlink?
* the agreement terms are presented in a scrollbox? If a scrollbox is used, must the user be forced to scroll through the scrollbox?
* the user is asked to check an additional box, such as a certification that the user has read the agreement?
In all of these cases, I believe the contract should be properly formed whether the answer to these questions is yes or no. However, after reading today's opinion, I'm now a fan of adding a bonus mandatory checkbox to the formation process. Here, a user mounted a sophisticated challenge to a mandatory non-leaky clickthrough process, and the bonus mandatory checkbox helped squelch the challenge. I think the court would have enforced the agreement without the checkbox, but the checkbox sure put the user in an awkward/untenable position.
Scherillo bought a financial report about a company from Dun & Bradstreet's Small Business Solutions website. Scherillo alleges that the report painted an overly rosy picture of the company, leading him to make bad investment decisions that cost him money when the company tanked. Scherillo wants D&B to cover his investment losses.
Scherillo is almost certain to lose on the merits. Indeed, this case brought to mind one of the earliest cyberlaw cases, Daniel v. Dow Jones, 520 N.Y.S. 2d 334 (N.Y.C. Civ. Ct. Spec. Term 1987). (This case is a fun read--see how the court discusses electronic networked communications almost a quarter-century ago). That case involved Dow Jones' publication of an ambiguous report via a dial-up online service that led the plaintiff to make a bad investment decision. The court said that any tort claim for publishing inaccurate information required the plaintiff to show that it had a "special relationship" (analogous to a fiduciary relationship) with the information vendor, and an ordinary customer-vendor relationship did not qualify as a special relationship.
Interestingly, D&B would rather hear the case in NJ than keep it in NY and hope to benefit from substantive NY law that surely would doom Scherillo's case. (Perhaps NJ has a similar law.) To move the case to NJ, D&B invoked the venue selection clause in its user agreement. Let's look at the online contract formation process. The court says:
"since 2007, the SBS website has included a page that requires users to register before purchasing a Dun and Bradstreet product ("the registration page"). On the registration page, users input information, including their e-mail address and name. The bottom quarter to third of the page contains a scrollable text box with the title "Terms and Conditions" [which contained a mandatory venue selection clause designating NJ]. Directly below this text box there is more text that reads: "I have read and AGREE to the terms and conditions shown above." Immediately adjacent to this text is a much smaller, empty box ("the terms and conditions check box"). Also at the bottom of the page is another box containing the phrase "Complete Registration" ("the Complete Registration box"). Clicking on this box completes the user's registration. McDonald testified that if a user clicks on the Complete Registration box without checking the terms and conditions check box, the user is unable to complete registration and is returned to the registration page."
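The gate the court describes can be sketched as a few lines of server-side validation. This is a hypothetical illustration of the mechanism, not D&B's actual code; the field names are invented:

```python
# Hypothetical sketch of the registration gate the court describes:
# the server refuses to complete registration unless the user checked
# the "I have read and AGREE..." box, and it records the assent.

def complete_registration(form: dict) -> dict:
    """Process a registration form; reject it if the T&C box is unchecked."""
    if not form.get("terms_accepted"):
        # Mirror the site's behavior: the user is returned to the
        # registration page and cannot proceed.
        return {"registered": False,
                "redirect": "registration_page",
                "error": "You must agree to the Terms and Conditions."}
    # Record the assent alongside the account -- useful evidence if
    # contract formation is later challenged.
    return {"registered": True,
            "assent_recorded": {"email": form.get("email"),
                                "terms_accepted": True}}

# A submission with the box unchecked fails; with it checked, it succeeds.
rejected = complete_registration({"email": "user@example.com"})
accepted = complete_registration({"email": "user@example.com",
                                  "terms_accepted": True})
```

The design point is that the checkbox state is validated server-side, so the only way to reach the post-registration screens is through an affirmative act of assent, which is exactly what made Scherillo's "accidental check" theory so hard to sustain.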
Check out the page yourself as I saw it in Google Chrome on Feb. 18 (with cropping). The formation process looks pretty standard to me.
Scherillo attacked the formation process by saying he never consented to the agreement because "it was possible for him to unknowingly and involuntarily 'check' the terms and conditions check box." Not only that, he lined up Sean Chumura, "a cyberwarfare and computer forensics expert" who is also [LINK NSFW] helping Perfect 10 in its lawsuit against Google, to testify that "it was possible for plaintiff, while 'tabbing' through the registration page, to inadvertently hit the space bar and thereby 'check' the terms and conditions box."
[Snarky paragraph alert] First, this may prove the adage that you can find an expert to testify about ANYTHING. Second, Scherillo alleged $75k of investment losses. For a low-value lawsuit like that, he needs a cyberwarfare expert??? Third, I believe Chumura has a MySpace page. Really...? I wonder if he uses an AOL.com email address too. The MySpace page also reveals that its author appeared to attend the Dan Quayle school of spelling.
OK, back to the case. The judge was no more tolerant of this nonsense than I am. He resolves the factual dispute by saying:
even under plaintiff's theory--that, while "tabbing" through the fields on the registration page, he accidentally hit the space bar key and thereby "checked" the terms and conditions box--plaintiff would have seen the check mark appear in the box and then still would have had to hit the "return" key (or clicked the "complete registration" box with the mouse) to complete the registration and advance to the next screen. Plaintiff would have had an opportunity to see that he checked the box inadvertently before he then hit the return key on the "complete registration" box. Thus, to accept plaintiff's theory, the Court would have to find that plaintiff hit two keys accidentally--the space bar and the return key--and that he was then involuntarily and unexpectedly sent to the next screen where he nonetheless proceeded to enter his credit card information and complete the purchase of the report. This alleged chain of events is simply not credible.
Therefore, Scherillo's click on the "Complete Registration" box manifested Scherillo's assent to the terms, even if Scherillo chose not to review them. The court says that the fact that the terms were in a scrollbox is immaterial, and the fact that some sites require the user to scroll through the scrollbox before proceeding doesn't affect the effectiveness of D&B's implementation.
I believe this court would have upheld the formation process even without the bonus checkbox, but you can see how the checkbox defused the withering assault of a cyberwarfare expert. Thus, you might consider implementing the bonus checkbox to discourage similar silly attacks against your contract formation process in the future.
February 18, 2010
Google Hit With Another Antitrust Lawsuit. Does It Have Microsoft Ties? Google v. myTriggers
By Eric Goldman
Google, Inc v. myTriggers.com, Inc., 09 CV 14836 (Franklin County Ct. of Common Pleas, Ohio). Google filed its first amended complaint on January 20, 2010. myTriggers filed an answer with counterclaims on February 2, 2010. See both pleadings here.
About a year ago, Google was sued by an obscure company called TradeComet for various antitrust violations. TradeComet’s lawsuit wasn’t the first private antitrust claim against Google; other scrappy claims had arisen over the years. However, none of them were serious challenges or went anywhere.
The TradeComet complaint looked equally low-merit, so I would have probably ignored it except that TradeComet’s counsel was the NYC-based Cadwalader Wickersham & Taft—which also does antitrust work for Microsoft. Those two facts could be completely unrelated, but it’s possible that they aren’t. First, I can’t imagine Cadwalader would jeopardize a lucrative relationship with a Fortune 50 company to take on a low-merit lawsuit for a no-name company. Therefore, either Cadwalader viewed the various antitrust issues as so unrelated that no legal or business conflicts were created—a risky judgment given Microsoft’s sensitivity about both Google and antitrust—or Microsoft approved/acquiesced to Cadwalader’s representation of TradeComet. Second, Microsoft has been engaged in a multi-year, multi-front effort to harass Google on antitrust issues, and Cadwalader’s involvement would be consistent with that campaign.
I checked PACER today, and the TradeComet lawsuit is going nowhere fast. Google filed a motion to dismiss in March 2009, prompting an exchange of memos in April, and the last (non-substantive) PACER entry was in August. It appears that everyone is waiting for the judge to rule on Google’s motion to dismiss from almost a year ago.
Meanwhile, an intriguing and unexpected new antitrust battlefront has opened up. myTriggers is run by NexTag alums and, IMO, has an even worse brand name than TradeComet. Do a search for “mytriggers” and see if you can avoid juvenile giggles. myTriggers’ properties include three shopbots/shopping comparison websites: mytriggers.com, comparisonsearches.com and shopbig.com. A quick review of these websites revealed no obvious reason why I would choose them over better-known shopbots.
As a result, I could see why these sites might have trouble generating organic traffic. Therefore, I wasn’t surprised to learn that myTriggers was a heavy AdWords advertiser. myTriggers said that Google had extended it a $250,000 line of credit, and it incurred $200k in AdWords charges for December 2007. Meanwhile, myTriggers clearly enjoyed its acquired traffic. It did three multi-million dollar rounds of financing (the 2005 initial capitalization and rounds in 2006 and 2007) and leased a big datacenter in March 2008. But just as it was finalizing the lease, the bottom dropped out for myTriggers. Google implemented a change to myTriggers’ quality score, which (according to myTriggers) boosted its minimum bids 10-100X. It’s never good for margins when the price on a key input goes up 10X or more, and myTriggers subsequently fired almost all of its employees and effectively exited the market.
This left the pesky matter of myTriggers’ unpaid bill to Google—to the tune of about $335k, according to Google. I’m not sure how many of myTriggers’ millions haven’t been spent, but clearly Google thought it was worth chasing this money. It hired (presumably cheap) local Ohio counsel and filed one of the shortest complaints I can recall seeing (2 sentences!) in myTriggers’ local state courthouse—even though the AdWords contract appears to say that contract-related litigation *must* be adjudicated in Google’s home court. Other than the risk that myTriggers didn’t have the money, Google appears to have treated this as a routine and unexciting collections matter.
This is where the story gets weird. Rather than plead poverty to Google or mount a generic but low-cost contract defense, myTriggers retained THREE law firms and also brought a counterclaim under Ohio’s state antitrust law, the Valentine Act. The three law firms are: (1) local Columbus counsel, presumably initially retained to handle the collections matter; (2) a Cincinnati firm that includes Stanley Chesley, Ohio bar #852 (!), a litigator of some renown who has enough gravitas to impress most judges; and (3) the same four-person DC-based antitrust team from Cadwalader that also represents TradeComet.
I am struggling to make sense of myTriggers’ litigation choices. Assuming myTriggers even has the money, writing a $335k check to Google (and I bet Google would have taken less!) is almost assuredly cheaper than paying three law firms to mount an antitrust assault on a $20B/year behemoth. Assuming that myTriggers wants to maximize profits, then either (1) myTriggers thinks its odds are good enough that it will win AND make enough money to pay the 7 lawyers on the counterclaim's signature page plus their teams, or (2) the law firms struck an unbelievably sweet deal on fees.
Either way, how did Columbus-based myTriggers connect with the DC-based Cadwalader team? Did myTriggers independently call up Cadwalader? The TradeComet lawsuit got some press, but that was a year ago. If myTriggers really thought it had a case, it might have preemptively sued Google rather than waiting for Google to sue it. Did myTriggers get connected to Chesley for some reason, who then recommended bringing in Cadwalader? Did Cadwalader reach out to myTriggers? If so, I would like to know more how this happened and how this matter pertains to Cadwalader’s relationship to Microsoft. I’m also confused about the relative roles of Chesley and Cadwalader. It’s not immediately obvious why myTriggers would need both Chesley and Cadwalader or be willing to fund both.
Whatever the case, I suspect the antitrust claims caught Google flat-footed. A simple and low-stakes collections matter has blown up into a potentially significant lawsuit in an undesirable forum. Google chose Ohio state court for the collections matter despite its AdWords contract, so now it will have a tough time extricating itself from that court. But I suspect it would rather have an antitrust case in federal court, not state court—often (but not always) federal judges are more sophisticated than state judges and less susceptible to hometown bias. And I’m sure Google would rather fight antitrust claims on one of the coasts than in the Rust Belt, especially if myTriggers argues that Google’s evilness cost Ohioans jobs. Google probably didn’t mean to offer battle in this venue, but someone did a really good job of seizing the opportunity and forcing Google to fight the battle in a suboptimal setting.
Although myTriggers’ counterclaim is noticeably less well-drafted than the TradeComet complaint, the substance of myTriggers’ antitrust counterclaim is similar to the TradeComet allegations. The basic argument is the same: myTriggers argues that it’s a competitor to Google and that Google raised ad prices on a competitor to squelch it. As a result, I think this counterclaim is about as meritorious as TradeComet’s—in other words, not so much.
That said, the counterclaim had a couple of interesting factual allegations. First, in paragraph 12 of the counterclaim, myTriggers alleges that Google cuts special deals with selected shopbots so that their ads are not subject to the same quality scores as other advertisers. (Compare paragraph 101 of the TradeComet complaint, which wasn’t as clear that Google made these adjustments by agreement). I would love to see one of the Google-shopbot agreements containing such a clause. Could this just reflect some aspect of Google’s standard algorithm that automatically preferences shopbots without any agreement? Or could the NexTag alums know something we don't?
Second, Paragraphs 14-16 allege that Google manually maintains a “whitelist” that means that “neither Google nor its ‘search partners’ will eliminate the company from the market,” and a shopbot not placed on the whitelist “is forever blacklisted by Google and its ‘search partners.’” The TradeComet complaint doesn’t have a directly analogous allegation, and I have no idea what this means. If you have any thoughts or theories, I would welcome them. These allegations should be subject to the Ohio equivalent to Rule 11, so myTriggers should have some evidence in its possession to back this up. I would love to see that evidence.
February 16, 2010
Kozinski and Goldfoot on Cyberspace Exceptionalism and Internet Regulation
By Eric Goldman
Alex Kozinski & Josh Goldfoot, A Declaration of the Dependence of Cyberspace, 32 Colum. J.L. & Arts 365 (2009).
In early 1996, in response to Congress' enactment of the Communications Decency Act (the first comprehensive attempt to regulate the Internet), John Perry Barlow published his cyberspace exceptionalist screed, “A Declaration of the Independence of Cyberspace.” The manifesto (naively, IMO) tells government regulators that they are outdated and should not—and cannot—regulate the Internet.
Judge Kozinski, chief judge of the Ninth Circuit, and Josh Goldfoot, a trial attorney in the DOJ's Computer Crime and Intellectual Property Section (CCIPS), use Barlow's article as an entry point to discuss Internet exceptionalism/regulation generally. Although the article expresses the authors’ personal views, it amplifies some themes Judge Kozinski has been developing in his recent Internet jurisprudence, most notably the Roommates.com case and Perfect 10 v. Visa. Because Judge Kozinski plays a crucial role on the federal appellate court governing both Hollywood and Silicon Valley, this article is worth a close look.
Judge Kozinski made his distaste for Internet exceptionalism clear in the Roommates.com opinion. In this article, the authors explain this view more thoroughly:
It is a mistake to fall into Barlow's trap of believing that the set of human interactions that is conducted online can be neatly grouped together into a discrete “cyberspace” that operates under its own rules. Technological innovations give us new capabilities, but they don't change the fundamental ways that humans deal with each other....[W]hen the internet is involved in a controversy only because the parties happened to use it to communicate, new legal rules will rarely be necessary. When the substance of the offense is that something was communicated, then the harm occurs regardless of the tools used to communicate....[T]he vast majority of internet cases that have reached the courts have not required new legal rules to solve them.
While I generally agree with this, I also think it’s an antiquated sentiment. Whether or not cyberspace exceptionalist laws are logical or even appropriate, legislators have found them irresistible, resulting in dozens or hundreds of Internet-specific statutes. I explore this dynamic in my article “The Third Wave of Internet Exceptionalism.” So to the extent the authors are arguing that we don’t need new cyberspace-specific laws, that ship sailed a long time ago.
The authors conclude that "the internet is doing wonderfully. It has survived speculative booms and busts, made millionaires out of many and, unfortunately, rude bloggers out of more than a few. The lack of a special internet civil code has not hurt its development."
I agree that the Internet is doing wonderfully, but I would assign causality differently. Legislatures in the 1990s passed a number of Internet-favorable laws, such as the Internet Tax Freedom Act, which kept taxing authorities from loving the Internet to death, and 47 USC 230, which provided a crucial immunity to online intermediaries. Reverse-engineering the Internet’s success is a tricky science, but my hypothesis is that the success is partially due to these “special Internet civil codes,” not due to their absence. For more on this with respect to 47 USC 230, see my talk notes from the Denver University Cyber Civil Rights event.
“Death of the Internet” and “Death of Innovation” Arguments
The authors address two common arguments that Internet defendants make to support favorable exceptionalist rulings, including that an adverse ruling (1) will end the Internet or (2) harm innovation.
They suggest that "end of the Internet" arguments can be powerful (specifically addressing Judge McKeown’s doomsday concerns in her Roommates.com dissent):
The argument that a legal holding will bring the internet to a standstill makes most judges listen closely. Just think of the panic that was created when the Blackberry server went down for a few hours. No one in a black robe wants to be responsible for anything like that, and when intelligent, hard-working, thoughtful colleagues argue that this will be the effect of one of your rulings, you have to think long and hard about whether you want to go that way. It tests the courage of your convictions.
While end-of-the-Internet arguments can grab judges' attention, I have to assume that the litigant loses credibility if the claim is overstated. So use the argument sparingly, like when your client's loss will pry beloved Crackberries out of the judges' hands.
The authors are less impressed with the "death of innovation" argument.
[P]romoting innovation alone cannot be a sufficient justification for exempting innovators from the law. An unfortunate result of our complex legal system is that almost everyone is confused about what the law means, and everyone engaged in a business of any complexity at some point has to consult a lawyer. If the need to obey the law stifles innovation, that stifling is just another cost of having a society ruled by law. In this sense, the internet is no different than the pharmaceutical industry or the auto industry: they face formidable legal regulation, yet they continue to innovate.
There is an even more fundamental reason why it would be unwise to exempt the innovators who create the technology that will shape the course of our lives: granting them that exemption will yield a generation of technology that facilitates the behavior that our society has decided to prohibit. If the internet is still being developed, then we should do what we can to guide its development in a direction that promotes compliance with the law.
I’m sympathetic to this point. Personally, I feel like arguments that a ruling or law will harm “innovation” are often make-weight. “Innovation” is ill-defined and difficult to measure (e.g., some folks believe patent applications/issuances quantify innovation, but we know better), and it is politically incorrect to oppose “innovation” (you might as well oppose other incontrovertible ideals like freedom, Mother Teresa and puppies). Thus, the “harms innovation” argument automatically, and often unfairly, puts the opponent on the defensive—they can either try to debate what’s better for innovation or stand silently and look like they oppose innovation. But debates about what’s best for “innovation” are almost always irresolvable because innovation can take many forms, and we do not know what precise mixture of government intervention and deregulation will foster socially optimal levels of innovation. For more on this, see, e.g., Niva Elkin-Koren and Eli Salzberger’s analysis of Coasean allocations on innovation.
At the same time, the authors’ arguments are a little disquieting because they imply that innovation can result in only one of two outcomes—legal or illegal, with nothing gray in between. From my perspective, much (most?) Internet entrepreneurship/“innovation” exists between the two endpoints of the legality continuum. For example, in 1996, I believe many legal experts would have said that unconsented spidering and indexing of a website was probably illegal (a question that has not been definitively resolved even today)—so if we wanted to avoid possibly illegal innovation, Google would not exist today. As a result, it might sound great to channel innovation towards only clearly legal activities, but I don’t really think that’s what we want.
Secondary Liability and Anonymity
The article also has some troubling remarks on secondary liability and anonymity:
If the legal rules change, and companies are held liable more often for what their users do, then the cost of anonymity would shift away from victims and toward the providers. In this world, providers will be more careful about identifying users. Perhaps online assertions of identity will be backed up with offline proof; providers will be more careful about providing potential scam artists in distant jurisdictions with the tools to practice their craft. All this would be expensive for service providers, but not as expensive as it is for injured parties today.
I would like to see some empirical support for the last sentence’s comparison of expenses. It’s not self-evident to me. Further, if we are going to do a cost accounting, we also need to consider what socially beneficial activity is dissuaded by service provider authentication of identity.
The authors continue:
Secondary liability should not reach every company that plays any hand in assisting the online wrong-doer, of course. Before secondary liability attaches, the plaintiff must show that the defendant provided a crucial service, knew of the illegal activity, and had a right and a cost-justified ability to control the infringer's actions. This rule will in almost every case exclude electrical utilities, landlords, and others whose contributions to illegal activity are minuscule.
This argument is consistent with traditional tort principles (as well as Judge Kozinski’s dissent in Perfect 10 v. Visa regarding copyright liability). 47 USC 230’s immunity breaks these venerable principles. As I’ve noted before, bright judges steeped in the common law can have a tough time with Congress’ rejection of traditional tort principles (as well as the concomitant reduction in judicial discretion).
Meanwhile, I’m wondering about the qualifier in the last sentence (“in almost every case”). Unless specified in a statute, I can’t imagine *any* circumstances where it would be appropriate to hold people who make “minuscule contributions” responsible for third party torts—especially electrical utilities, who as regulated monopolies usually have no discretion about whether or not to provide power to their customers.
Although most Ninth Circuit Internet rulings have reached the right result, recent rulings have shown some hostility towards 47 USC 230 specifically and Internet defendants generally. I am concerned that the Ninth Circuit has become a dangerous circuit for Internet defendants, and this article does not dispel my fears. I think Internet defendants should carefully weigh the pros and cons before appealing a case to the Ninth Circuit. The wild card factor is high, and the likelihood of getting an incomprehensible legal standard is higher still.
February 12, 2010
Ripoff Report Sues Blogger, Loses on Jurisdictional Grounds--Xcentric Ventures v. Bird
By Eric Goldman
Xcentric Ventures, LLC v. Bird, 2010 WL 447759 (D. Ariz. Feb. 3, 2010). See the initial complaint.
I usually find personal jurisdiction rulings mind-numbingly uninteresting, so I try my best to avoid them. However, some personal jurisdiction cases are exceptional, and this is one of them. It involves one of this blog's favorite litigants, the Ripoff Report, as a defamation plaintiff against a well-respected lawyer-blogger, Sarah Bird, COO and GC of SEOmoz.
The case involves Sarah's Jan. 2008 blog post on the SEOmoz blog entitled "The Anatomy of a RipOff Report Lawsuit." As you may know, many SEOs HATE Ripoff Report because of Ripoff Report's frustratingly high ranking in Google search results, which might be more attributable to Ripoff Report's venerability than its content quality. To cater to her audience's interest/fascination with Ripoff Report, Sarah undertook the Herculean effort of trying to catalog all of the Ripoff Report litigation she could find and narrate some of the litigation dynamics. It's a project I would not have undertaken because I know how long a project like that takes, but I was grateful she did the work and shared it with the rest of us. The post was a useful public service for researchers like me.
Perhaps not surprisingly given the overwhelming volume of information required to prepare her report, Sarah's post contained at least one factual error, which she subsequently admitted. The post also prompted a conversation between Sarah and Thomas Duffy, the Ripoff Report's former general counsel (a position now held by David Gingras). Sarah reported on their conversation in a follow-on post.
In my opinion, that should have been the end of it. Sarah undertook a near-impossible research task, made some errors, hashed out some issues with Ripoff Report and posted a follow-up. Instead, feeling that the article still encourages third party plaintiffs to bring false claims, Ripoff Report sued Sarah for defamation and "aiding and abetting" tortious acts by others.
This is not the first time that the Ripoff Report has gone on the defamation offensive. For example, I wrote about their lawsuit against the Phoenix New Times for an important work of investigative journalism they published. See my April 2008 blog post on that lawsuit and Thomas Duffy's response. So it isn't surprising that Ripoff Report sued Bird, but I still think it's an unfortunate turn of events.
It is also a lawsuit that could backfire. Every defamation lawsuit Ripoff Report brings could establish adverse legal precedent that increases the potential exposure of their own contributors--some of whom probably are not as careful as Sarah. In my opinion, this risk is doubly troublesome because contributors can't remove their reports from Ripoff Report, even if the contributor believes that taking down the content would reduce their liability.
In today's case, the court rejected the Ripoff Report's lawsuit against Bird because the court in the Ripoff Report's home state of Arizona lacked personal jurisdiction over her. Ripoff Report tried to establish jurisdiction using the Calder "Effects Test." The court, trying to read ambiguous 9th Circuit precedent interpreting that test, says:
mere knowledge of an individual's residence, combined with intentional posting of defamatory statements on the internet (which, taken together, makes it foreseeable an individual will be harmed in a certain forum location) does not amount to “express aiming.” Although what else is required is unclear, the express aiming requirement appears to demand a showing that there is at least some additional connection between the defamatory act and the forum.
Applying this standard, the court concludes "the Plaintiffs in this case have alleged no connection between the allegedly defamatory article and the forum other than that the article was about Plaintiffs and Defendants knew Plaintiffs resided in Arizona." Therefore, personal jurisdiction didn't attach.
It's clear from this opinion that the judge didn’t know what to do with the prevailing Ninth Circuit precedents. Surprise—another area where the Ninth Circuit has horked Internet law. As a result, the court's assessment of Ninth Circuit law is not completely free from doubt. However, it is also clear that this judge doesn't want to hear this case. In a footnote, the court notes the policy concerns about an expansive interpretation of the Calder Effects Test:
The ability to hale an internet user into a distant forum based on an allegation of intentional defamation could be used to chill free speech. It is true that a rule to the contrary could effectively deprive individuals who cannot afford to litigate outside their forum of a remedy for internet-based defamation....The harm caused by internet-based defamation can be quite severe and widespread, and conventional wisdom might be that one should have the right to remedy that harm without having to litigate in a distant forum. These concerns are mitigated, however, by the relative ease of modern air travel. It is also important to remember that at the pleading stage a plaintiff need only make a prima facie case of defamation. Requiring plaintiffs to bear the burden of traveling is consistent with requiring plaintiffs to bear the burden of proof at trial, and with the goal served by personal jurisdiction rules of preventing defendants from being unreasonably haled into a distant, and potentially biased, forum.
So true! Yet, judges rarely dismiss cases on jurisdictional grounds, so it's not clear how many other judges would embrace these sentiments.
I reached out to David Gingras, and he informed me that Ripoff Report is planning to appeal this ruling. There may be some legal ground to reevaluate the district court's reading of Ninth Circuit law, but going to the Ninth Circuit has some peril for Ripoff Report too (remember Kozinski's harassthem.com example?). Stay tuned.
February 09, 2010
Catching Up With Wikipedia
By Eric Goldman
I recently posted the final published version of my article Wikipedia’s Labor Squeeze and its Consequences. In the course of updating the draft, I reviewed the news coverage of Wikipedia from the second half of 2009, and I thought I would share some of the more interesting tidbits that caught my eye.
Flagged Protection v. Flagged Revisions
The biggest news from the second half of 2009 was the August announcement from the Buenos Aires Wikimania that the English-language Wikipedia was rolling out Flagged Revisions for living people's biographies. It was first reported by Noam Cohen at the New York Times and then repeated in countless articles. I think the announcement took everyone by surprise because it seemed to come from out of the blue. Several other Wikipedia versions deploy Flagged Revisions, but before Wikimania, the consensus had been to try Flagged Protection and Patrolled Revisions--not Flagged Revisions--on the English-language Wikipedia.
[Some of you may be wondering: what's the difference? A lot! With Flagged Revisions, most user edits are invisible to logged-out readers until a more trusted editor approves them. Flagged Protection is a variation of Full Protection and Semi-Protection. Edits from less trusted users to protected articles are invisible to logged-out readers until a more trusted editor approves them. Flagged Protection substitutes for Full Protection, where less trusted users cannot make any edits to the protected articles whatsoever; and Semi-Protection, where only some users can make edits to those articles. The percentage of articles currently fully- or semi-protected is very low--I believe less than 1%--and presumably Flagged Protection would be used equally sparingly. In contrast, the media announcements indicated that Flagged Revisions would apply to all living people's biographies, a significant minority of Wikipedia entries. Patrolled Revisions is more procedural than substantive; it's merely a way for editors to communicate with each other that they have verified previous edits.]
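The distinction above boils down to a visibility rule. Here is a toy sketch of that rule as I understand it from the descriptions in this post--an illustration only, not Wikipedia's actual implementation (MediaWiki's real FlaggedRevs system is far more configurable), and the function and parameter names are my own:

```python
# Toy model of the edit-visibility rules described above.
# Hypothetical sketch; not Wikipedia's actual code or configuration.

def edit_visible_to_logged_out_reader(scheme, editor_trusted, edit_approved,
                                      article_protected=False):
    """Return True if a new edit is shown to logged-out readers."""
    if scheme == "flagged_revisions":
        # Applies broadly (e.g., to all living people's biographies):
        # untrusted edits stay hidden until a trusted editor approves them.
        return editor_trusted or edit_approved
    if scheme == "flagged_protection":
        # Applies only to the small set of protected articles (<1% of
        # entries); everything else behaves like the open wiki.
        if not article_protected:
            return True
        return editor_trusted or edit_approved
    raise ValueError(f"unknown scheme: {scheme}")

# An anonymous editor's unreviewed edit to an ordinary article is visible
# under Flagged Protection but hidden under Flagged Revisions.
print(edit_visible_to_logged_out_reader("flagged_protection", False, False))  # True
print(edit_visible_to_logged_out_reader("flagged_revisions", False, False))   # False
```

The key difference the sketch captures: Flagged Protection only changes behavior for the tiny protected subset, while Flagged Revisions gates a large class of articles by default.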
The fact that Wikipedia apparently leapfrogged Flagged Protection to adopt the much more restrictive Flagged Revisions seemed like an ominous development and perhaps an indication that the vandals and spammers were winning more than we thought. The public angst over Wikipedia's announced move was deafening. People seemed to be genuinely shocked that Wikipedia might make a wholesale move from an open edit site to something substantially more restrictive.
However, this angst was misdirected. Despite multiple efforts to get clear information from Wikipedia (including this unbelievably confusing blog post), it turns out that the English-language Wikipedia isn't adopting Flagged Revisions at all but instead is proceeding with its trial of Flagged Protection. I think Farhad Manjoo properly captured our collective frustration in his TIME article entitled Jimmy Wales Quietly Edits Wikipedia’s New Edit Policy: “In several interviews, including many with TIME, officials at the Wikimedia Foundation, the nonprofit that manages Wikipedia, explained that the user-edited online encyclopedia would soon impose restrictions on articles about living people.” I really don't understand how or why Wikipedia successfully misdirected us for weeks, but they could have easily avoided a lot of confusion with a clearer announcement and more prompt corrections.
So the bottom line is that, despite the August drama, Wikipedia is not flipping its default to closed editing yet.
Nevertheless, as I explain in my Wikipedia Labor Squeeze article, Wikipedia inevitably will adopt more restrictive editing policies. It's just a matter of time. Accordingly, I will not be surprised when Wikipedia announces more restrictions. In contrast, judging from the reaction to the August announcement, I expect most people will be shocked by Wikipedia's next effort to deploy a more restrictive editing policy.
In late August, Wired reported that Wikipedia was adopting Wikitrust, a tool that automatically color-codes edits to an entry to show the projected credibility of those edits. Thus, at a glance, a reader can tell which parts of an entry are more likely to be accurate and which parts should be scrutinized more closely. Amazingly, though, this was also a botched announcement (read the update to the Wired article). Wikipedia is only conducting a trial of Wikitrust.
A Couple Other Interesting Factoids
From New Scientist: "'Occasional' editors, those who make just a single edit a month, have 25 per cent of their changes erased, or reverted, by other editors, a proportion that in 2003 was 10 per cent. The revert rate for editors who make between two and nine changes a month grew from 5 to 15 per cent over the same period."
From NY Times: Wikipedia contributors are 80% male, 65%+ single, 85%+ childless and 70% under 30 years old. As I explain in my article, this is a group that will experience significant life changes, which in turn will reduce or eliminate the time they have to contribute to Wikipedia. Will sufficient numbers of replacements emerge?
From an article entitled The Singularity is Not Near: Slowing Growth of Wikipedia
- "the number of active editors and the number of edits, both measured monthly, has stopped growing since the beginning of 2007"
- "The rate of reverts-per-edits (or new contributions rejected) and the number of pages protected has kept increasing. Occasional editors experience a greater percentage of reverts per edits in comparison to the more prolific editors."
The Wall Street Journal Article
On November 23, the Wall Street Journal published an A1 article about Wikipedia's declining volunteer ranks. I spoke with the reporter in June, but my remarks only made the sidebar. The lead paragraph's thesis is that "unprecedented numbers of the millions of online volunteers who write, edit and police [Wikipedia] are quitting." Citing research by Felipe Ortega, the article says that the English-language Wikipedia suffered a net editor loss of 49,000 in Q1 2009 (compared with a net loss of 4,900 in Q1 2008). It's worth reading the entire article. It stirred the pot quite a bit.
Erik Moeller, Wikipedia's Deputy Director, responded to the article with (among other things) a different cut at the numbers:
Studying the number of actual participants in a given month shows that Wikipedia participation as a whole has declined slightly from its peak 2.5 years ago, and has remained stable since then. (See WikiStats data for all Wikipedia languages combined.) On the English Wikipedia, the peak number of active editors (5 edits per month) was 54,510 in March 2007. After a more significant decline by about 25%, it has been stable over the last year at a level of approximately 40,000.
Consider this blog post my 4+ year check-in on my prediction in December 2005 that Wikipedia would fail in 5 years. In that post, I didn't tightly define what I meant by "failure," and frankly my "failure" rhetoric has been unintentionally and unnecessarily inflammatory. In all cases, I'm not rooting for Wikipedia's failure (however defined).
However, I remain baffled by the folks who are so enraptured by Wikipedia's mystique that they believe the site will defy gravity. Whatever you take away from the data points I cite in this post, I think it's undeniable that Wikipedia is changing in material ways. Bright minds might disagree about whether those changes are good or bad. From my perspective, Wikipedia's evolution has followed a fairly predictable path. As on many UGC websites, contributor activity peaks and then declines, and the transition from first generation contributors to second generation contributors naturally has some bumps. More structurally, Wikipedia has followed an entirely predictable evolution of progressively tighter editorial policies, and I anticipate even tighter editorial controls are to come (to be accompanied by shocked public outcries each time).
As for my prediction, I'm waiting to see what develops this year, especially at Wikimania in August. Whether I'm right or wrong, I'll post my 5 year assessment of my prediction in December.
February 07, 2010
Third Circuit Schizophrenia Over Student Discipline for Fake MySpace Profiles
By Eric Goldman
Now that I’m a middle-aged man, I don’t find fake online profiles funny at all. But if the Internet had been around when I was in my early teens, I probably would have found a salacious fake online profile of a school authority figure side-splittingly funny. Unfortunately for today’s teens, the toxic brew of easy access to MySpace plus an underdeveloped sense of humor plus anti-authority sentiments has led to an unhealthy number of fake MySpace profiles—far too many of which result in lengthy and expensive court cases. In practice, the “joke” ends up being on the taxpayers.
The cases I’ve been seeing follow a predictable pattern:
Step 1: young teen creates a silly fake MySpace profile of a school authority figure.
Step 2: word of the MySpace profile spreads among the teen’s peers, and they have immature laughs.
Step 3: targeted authority figure discovers the fake profile.
Step 4: authority figure overreacts and metes out unnecessarily harsh discipline. Sometimes, the authority figure also sues the teen/parents or brings in the cops.
Step 5: the disciplined student brings a lawsuit for the authority figure’s overreaction.
Step 6: the parties’ lawyers win, but everyone else—the plaintiffs, defendants, and taxpayers—loses.
My opinion is that the world would be a better place if immature fake online profiles weren't created. It's clear that regulatory efforts to keep young teens off MySpace haven't eliminated this problem, and it's unrealistic to expect that kids won't find ways to pull silly stunts like this. Instead, our best hope for a better future rests at Step 4, when the authority figure decides whether to laugh off the fake profile or go on a rampage. Maybe over time school authority figures will be less shocked by fake online profiles and will learn not to take them personally. In an ideal world, authority figures will figure out how to use these immature stunts as a valuable opportunity to teach about online information credibility and the harms of cyberbullying.
When they get to court, judges are struggling with fake online profile cases, as best evidenced by a pair of Third Circuit cases issued on the same day last week. Two different panels reached different conclusions about whether school discipline for an off-campus-created fake MySpace profile violated the student's First Amendment rights (Layshock says yes, J.S. says no in a 2-1 split opinion). Why the difference?
Surprisingly, the two opinions don’t reconcile their outcomes very well. The Layshock case doesn’t acknowledge the J.S. opinion. The J.S. case does acknowledge Layshock, but only in a footnote:
we find the two cases distinguishable. Unlike the instant case, the school district in Layshock did not argue on appeal that there was, under Tinker, a nexus between the student's speech and a substantial disruption of the school environment.… This nexus, under Tinker, is the basis of our holding in the instant case. Rather, the Layshock panel held that the school district failed to establish that a sufficient nexus existed between the student's creation and distribution of the profile and the school district so that the district was permitted to regulate the student's conduct…. That panel also held, under Frazer, that the student's speech could not be considered "on-campus" speech just because it was targeted at the Principal and other members of the school community and it was reasonably foreseeable that school district and Principal would learn about the MySpace profile.
Obviously each case is fact-specific, so the dichotomous conclusions about the on-campus effects might be explained by factual differences in the fake profiles. From my perspective, the differences seemed pretty small, especially because I ignored the more outrageous fact-like claims as typical adolescent hyperbole. Putting aside the profile contents, I’d argue that Layshock’s on-campus effects were greater because: (1) his was just one entry in a series of fake profiles, and (2) the MySpace profiles were accessible on campus, while in contrast the school successfully blocked on-campus MySpace access in the J.S. case. In any case, any on-campus effects may depend on a number of factors out of the student’s control, including how the principal (over)reacts.
In the end, I think the main distinction between the two cases is that the J.S. court indulged its moral condemnation of the student’s poor choices, while the Layshock court kept its outrage in check. Another possible explanation is that the principal's response in Layshock was outrageously punitive—Layshock was diverted into 3 hours/day of special classes for behaviorally disruptive students even though Layshock was a gifted student normally taking AP classes—and applied inconsistently because the students who created the other fake profiles weren’t disciplined at all. Finally, the cases reached the Third Circuit panels in different procedural postures (e.g., Layshock won in district court on the First Amendment issue while J.S. lost in district court), and the respective school districts adopted different litigation strategies that appeared to have different efficacies.
Because the two Third Circuit opinions reach fact-specific outcomes without clearly distinguishing between the two factual circumstances at issue, the Third Circuit’s dichotomous rulings create a lot of uncertainty that will lead to more frequent, longer-lasting and unproductive court battles over student-created fake online profiles. As a result, these two cases would benefit from an en banc rehearing. Or perhaps the Third Circuit intentionally or unintentionally created a split (an unusual intra-circuit split) to tee this issue up for Supreme Court review.
Some topically related blog posts:
* Principal Loses Lawsuit Against Students and Parents Over Fake MySpace Page--Draker v. Schreiber
* Teenager Busted for Creating Fake "News" Story
* Social Networking Sites and the Law
* Moreno v. Hanford Sentinel, another possibly vindictive principal response to a MySpace posting
February 02, 2010
FTC Privacy Roundtable Recap
By Eric Goldman
[Introductory note: I have repeatedly criticized the FTC on this blog, and this post may implicitly criticize them as well. At the same time, I want to share a couple of compliments for the FTC. First, the FTC did a terrific job preparing for this event. For the panel I participated on, we had two official group organizing calls, plus I had at least 3 individual calls as well. I can’t recall another event which had more pre-event preparation efforts. Second, I remain consistently impressed with the dedication of the FTC staff attorneys. The FTC attorneys I've met uniformly seem to be trying to do the right thing, even if bright minds might disagree about what that is.]
Last week, the FTC held the second of three privacy roundtables at UC Berkeley. A large crowd (I estimate 200+ people) showed up, and I know that many other people watched online. Combined with my conversations with the FTC folks prior to the event, I took away a few meta-observations:
1) The FTC is Facebook-obsessed. FTC staff kept citing Facebook examples. It's clear that the FTC is paying extraordinarily close attention to Facebook.
2) The FTC has embraced the idea of "data as currency." The concept is that online services that don't make consumers pay with cash instead make consumers "pay" by providing their personal data. This didn't come up much at the second roundtable, although I understand it was a big issue at the first.
It's a little dispiriting to see this argument gain traction. I have repeatedly criticized this concept before (see my Coasean Analysis of Marketing and Data Mining and Attention Consumption articles), so I will only briefly recap its deficiencies here. Basically, the concept treats the provision of personal data as an automatic detriment to the consumer, which creates a zero-sum game—just like the transfer of cash, the service provider wins at the consumer's expense. Although consumers may suffer negative consequences from providing their personal data to service providers, the overall concept is wrong because many service provider-consumer relationships are "win-win" where both the consumer and the service provider are better off due to the data transfer. I build some economic formulas in my articles to explain these scenarios with more rigor. Win-win can occur, for example, if the service provider can provide better services to the consumer based on access to personal data. Personalized search is one example. Ultimately, any policy proposals predicated on treating data as currency are likely to overregulate by reducing or eliminating potential win-win scenarios.
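The win-win point can be made concrete with a toy welfare sketch. This is my own illustration with hypothetical numbers, not the actual formulas from Goldman's articles; it simply contrasts the zero-sum framing (data transfer only costs the consumer) with the case where data-enabled service improvements, like personalized search, leave both sides better off:

```python
# Toy sketch of "data as currency" vs. win-win data sharing.
# Hypothetical values; the function and parameter names are illustrative only.

def surpluses(privacy_cost, provider_data_value, service_improvement_value):
    """Return (consumer_surplus, provider_surplus) from a data transfer."""
    consumer = service_improvement_value - privacy_cost
    provider = provider_data_value
    return consumer, provider

# Zero-sum framing: the data transfer yields no service improvement,
# so the provider's gain is the consumer's loss.
print(surpluses(privacy_cost=5, provider_data_value=5,
                service_improvement_value=0))   # (-5, 5)

# Win-win: personalization is worth more to the consumer than the
# privacy cost, so both parties come out ahead.
print(surpluses(privacy_cost=5, provider_data_value=5,
                service_improvement_value=12))  # (7, 5)
```

A policy that treats every data transfer like the first scenario would also regulate away the second.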
3) The term "privacy enhancing technologies" or PETs lacks a consensus definition. Because we didn't agree on what qualifies as a PET, we couldn't determine if they had been successful or not.
Construed narrowly as add-on technologies that guard against specific vectors of privacy intrusions, it's clear that PETs have failed as a mass-market offering. Hardcore privacy folks may seek out tools that advance their interests, and they may even be willing to pay for those tools, but most folks don't care enough to pursue such solutions--even those available for free. (I highlight this tension in my 2002 Forbes editorial.)
However, if we construe PETs more broadly, they have been massively successful. For example, I would consider anti-spam/anti-spyware/anti-virus software as PETs. Obviously those software programs have other benefits, such as security protection, but they solve a variety of privacy-related problems too. For instance, my Gmail spam filter learns my preferences and, over time, blocks some types of unwanted emails (such as repeat emails meant for other “egoldman”s like Emma Goldman) from showing up in my in-box. Similarly, PETs have been incorporated into the browsers and provide default protection to their users. If we can get past the one-off single-vector conception of PETs, we may find lots of successful examples.
4) The online "privacy" dialogue hasn't advanced very far in the past 15 years. I felt like much of the 2010 roundtable's discussion would have been apropos 15 years ago. For example, instead of discussing cookies in 1995, in 2010 we are discussing flash cookies and supercookies. There's no real difference in the underlying principles; we're simply at a new point in the technological arms race. Just like technology evolved to provide user control over cookies, it will eventually catch up to flash cookies and supercookies and super-duper-cookies or whatever the next iteration of persistent client-side identifiers is called. Unless we look past the specific technological implementations and focus on broader concepts, we are doomed to repeat the same conversations.
5) Due to the semantic ambiguity of the word "privacy," "privacy" inquiries are guaranteed to fail. Ultimately, I found much of the roundtable discussion unenlightening because the "privacy" umbrella is too broad and ambiguous. From my perspective, the term "privacy" is always fatally ambiguous to any productive conversation; I just don't understand what it means. As a result, at the roundtable, panelists were simultaneously discussing privacy, security, anonymity and a variety of other concepts. The result was a jumbled doctrinal mess and a lot of talking past each other.
At the same time, the "privacy" umbrella hindered the inclusion of non-privacy concepts that might have helped overcome the deja vu tendency. The panel titles were:
Panel 1: "technology and privacy"
Panel 2: "privacy implications of social networking and other platform providers"
Panel 3: "privacy implications of cloud computing"
Panel 4: "privacy implications of mobile computing"
Panel 5: "technology and policy"
My latest project on reputation is relevant to the issues discussed at the roundtable, but where does "reputation" fit into these panels? Everywhere--and nowhere. Similarly, I was hoping to discuss the implications of 47 USC 230(c)(2), the immunization for filtering technologies, but where does that fit in? I hoped to discuss it in the first panel but we ran out of time. Using a classic "privacy" structure for the discussion implicitly keeps these important non-privacy considerations from emerging. As a result, this structure almost guarantees a "same old, same old" discussion by precluding new concepts from joining the discourse.
Before the panel, lame-duck Commissioner Pamela Jones Harbour gave some opening remarks. She expressed displeasure with Facebook's resetting of privacy defaults and disagreed with Mark Zuckerberg's quoted remarks that the technology change reflects emerging social attitudes. She also gave a lengthy shout-out to Paul Ohm's paper on de-anonymization/re-identification of non-PII. Note that we will have an evening panel event featuring Paul Ohm at SCU on April 7. Please put that on your calendar now. Paul's paper is already affecting the considerations of FTC Commissioners; come hear what the fuss is about.
After Commissioner Harbour, David Vladeck (head of the FTC's Bureau of Consumer Protection) gave some opening remarks as well. He summarized three conclusions from the first roundtable:
* Consumers don’t understand commercial information-collection practices (ex: data brokers, behavioral targeting).
* Lengthy privacy policies aren’t effective, but privacy disclosures are important.
* Consumers care about privacy.
He concluded his remarks with an ominous threat. He noted that the FTC continues to bring privacy-related enforcement actions, and in particular (a quote from his prepared remarks) "we are currently examining practices that undermine the effectiveness of tools consumers can use to opt out of behavioral advertising, and we hope to announce law enforcement actions in this area this year." I'm not sure what this means. Perhaps the FTC is fed up with NAI's behavioral ad network opt-out tool? I have not been able to make the tool work properly for years.
Finally, I'll mention a few thoughts from the social networking panel, which featured Erika Rottenberg of LinkedIn, Nicole Wong of Google and Tim Sparapani of Facebook. Given all the Facebook-bashing throughout the day, Tim was in the hot seat!
One of Tim’s talking points was that 35% of users customized their privacy settings in response to Facebook's privacy default resetting and its subsequent requirement that they review the settings. 35% user participation would be a remarkably high percentage for any website, and it’s incredible for Facebook with 350M claimed users.
Tim's other talking points didn't go over as well. He claimed that there are no barriers to entry for other social networking sites. This is technically true but woefully incomplete. It could very well be that the optimal number of social networking sites that consumers can actively embrace is precisely one, and there are good reasons to believe that social networking sites experience powerful network effects. See, e.g., the Reuters article about the tipping point between MySpace and Facebook.
Further, although the friendship relations are sticky, Facebook's real stickiness comes from the self-published content on Facebook that cannot be exported to another site. Tim completely flubbed the question about data portability from Facebook, slavishly espousing his talking point that Facebook will delete user accounts on their request--a non-sequitur that made most people in the audience quietly groan. We all understand that Facebook will kill content upon request, but the question on the table was how Facebook will allow users to move their extensive content to a competitor. Tim ducked that question because Facebook doesn't enable it. Facebook does not offer a front door for data portability, and Facebook has been shutting down the backdoor by suing folks like Power.com who try to create an unsanctioned portability method. To be clear, I'm not 100% convinced that Power.com is the good guy in that dispute, but I'm pretty confident that Facebook doesn't tolerate backdoor data portability.
Even so, I think Facebook's biggest threat is itself. Few users will get so mad that they will delete their accounts (I still have my Orkut and Friendster accounts, for example). Instead, Facebook should be concerned that users will simply reduce their usage because they get burned out or lose trust in Facebook. Ultimately this will cause users to migrate elsewhere, so the end game for Facebook could be a whimper, not a bang.
As an example of this latter phenomenon, Tim's talking points claimed that Facebook gives users control over whom they share each piece of data with at the time they publish it. He rightly praised this granularity, but I am still grumbly that Facebook killed the setting that kept my comments and likes off my profile page. Now, if I don't want those items to show, I have to manually delete each one. So I do have control over my publications as Tim touted, but the additional transaction costs cause me to comment on and like other posts less frequently than I used to. This seems like more of a bug than a feature in my book.
In contrast to Facebook, Nicole Wong hammered the point that Google embraces data portability and builds it into the design of many of its services. As she said (I'm paraphrasing her), because users can leave with a click, we have to be better with every product every day, and that pressure makes us build better products. That's the spirit! Facebook, are you listening?
February 01, 2010
Google Street View Lawsuit Revived, But Only on Trespass Grounds--Boring v. Google
By Eric Goldman
Boring v. Google Inc., 2010 WL 318281 (3rd Cir. Jan. 28, 2010).
You may recall the book project A Day in the Life of America [Amazon affiliates link], which published what 200 photojournalists saw on May 2, 1986. The book provided a great snapshot of Americana, both sensational and banal. As a dataset, Google's Street View reminds me a lot of that book. The Google camera cars automatically capture whatever they see, which in some cases can lead to unintentionally amusing results. See, for example, this list of 20 crimes captured on Google Street View and the Huffington Post's list of "Craziest Google Street View Shots OF ALL TIME."
Inevitably, some people are going to be unhappy with whatever Google's camera cars indiscriminately captured and published. The plaintiffs in this case, Aaron and Christine Boring, are Pennsylvania homeowners with a reclusive streak. The Google camera car drove down the Borings' private driveway (allegedly ignoring the Borings' signage), took pictures of their house and published the photos through Google Street View.
The Borings were not satisfied with exercising Google's opt-out mechanism and instead made a federal case out of Google's transgressions. However, the district court was not impressed and kicked the Borings out of court.
The Borings appealed to the Third Circuit, which rewarded them with a small window of opportunity. The district court had rejected the Borings' trespass claims because they had not adequately alleged damage from the trespass. The appellate court reversed this point, saying a real property owner does not need to allege damage in order to state a valid trespass claim. As the court says, "Here, the Borings have alleged that Google entered upon their property without permission. If proven, that is a trespass, pure and simple." I'm not a real property expert, but this sounds right to me. The district court cited an 1899 case in support of its ruling, but the appellate court said that precedent was inapplicable.
Thus, the trespass claim survives a 12(b)(6) motion to dismiss, and the case gets sent back to the district court. While the appellate court expressly declined to tell the judge what to do, it's pretty clear that the appellate court doubts that the Borings will be able to assert any cognizable damage. As the court says, "it may well be that, when it comes to proving damages from the alleged trespass, the Borings are left to collect one dollar and whatever sense of vindication that may bring." My guess is that's the best possible outcome for the Borings.
In my opinion, the court's rejection of the Borings' privacy claims is the more interesting cyberlaw development. The court sensibly concludes that any violation suffered by the Borings would not highly offend a reasonable person. In other words, the Borings overreacted in a way the law does not recognize.
Given that the Borings weren't depicted in the photos, the court's ruling suggests that publishing online photos of private property categorically can't qualify as a privacy violation, whether the photos are taken on public or private property. The court's ruling, however, leaves open the possibility that depicting people in the photos might still be actionable--a question not before the court.
While the case has been revived, it's entirely clear to me that the Borings will not find much success on remand. Nevertheless, to save litigation costs, Google ought to write a small check to settle the case, and the Borings would be prudent to take it rather than wait for the inevitable judicial denouement. To avoid further unwanted intrusions, they should use their settlement money to buy a gate for their driveway.