December 30, 2006
Google Wins Publisher's Lawsuit over AdSense Termination--Bradley v. Google
By Eric Goldman
Bradley v. Google, Inc., 2006 WL 3798134 (N.D. Cal. Dec. 22, 2006)
I'm struggling with developing a policy regarding blogging about pro se lawsuits (i.e., lawsuits brought without a lawyer) against major companies like Google. On the one hand, they are often obviously futile money-grabs, so blogging about them gives them more legitimacy than they deserve; in other cases, the lawsuits are just trolling for publicity, and blogging on them contributes to the desired but undeserved payoff. On the other hand, these cases often facilitate the development of legal precedent. For example, major companies can use pro se lawsuits to achieve early landslide wins that can be cited when meatier cases arise.
All this is a long way of saying that I wonder if this lawsuit is truly blog-worthy; if you have strong views, please email me and let me know. (Sorry, comments are still down after the comment spam attack).
In this case, Theresa Bradley was a website publisher who signed up for AdSense. She also has a JD (don't get me started on the unique problems posed by lawyers as plaintiffs) and is a habitual plaintiff, with 35+ lawsuits under her belt. Ever curious, she clicked on the AdSense-served ads displayed on her website to see who was advertising there. She investigated the ads often enough that Google kicked her out of the AdSense program. Although the opinion doesn't use the term "click fraud," that's what many people would call this behavior. The opinion implies that her AdSense account had $5 in it at the time of termination.
She fought back in court, bringing the equivalent of a wrongful termination lawsuit. She also claims that Google wiped out incriminating evidence from her Gmail account. The court rejected almost all of her claims with limited ability to amend, but her claim that Google destroyed her personal property by deleting emails survives even if she doesn't amend. It would be a significant development if email deletion qualifies as the destruction of personal property, but given the early stage of the case, the court hasn't reached that conclusion yet (although it may be consistent with Ninth Circuit precedent).
Even though Bradley's suit wasn't thrown out entirely, the lesson is clear--AdSense publishers effectively have no legal recourse if they get kicked out of the program. Then again, every AdSense publisher knows not to click on their own ads...
UPDATE: Rebecca takes issue with the court's reasoning.
December 29, 2006
My Wikipedia Page is Safe (For Now...)
By Eric Goldman
In my last post on Wikipedia, I mentioned that my personal Wikipedia page had been tagged "article lacks information on the importance of the subject matter." Shortly thereafter, things took the inevitable turn for the worse--the page got nominated for deletion. The Wikipedians huddled on a discussion page and failed to reach a consensus: 7 voted to keep the page, and 7 voted to delete it and/or merge it into another page. Apparently, this preserves the status quo, so my Wikipedia page was saved from deletion for now. It's not exactly the most ringing endorsement, but it's better than having to report to mom that Wikipedia deemed me "non-notable"!
UPDATE: I feel a little better now that I know Matt Cutts was nominated for deletion too. This is the problem of having contributions from people who aren't experts in the applicable area.
December 22, 2006
Top Cyberlaw Developments for 2006 – Part 2
By John Ottaviani
(Eric Goldman is away until the New Year. He left me the keys to the blog. I warned him that this may be like leaving the teenagers the keys to the house when the parents go away for the weekend!)
As Eric pointed out, our “Top Ten Cyberlaw Developments for 2006” list left out several notable developments. Here are a few more that were “near misses” for the list. In no particular order of importance:
· Electronic Voting – There was a lot of buzz about electronic voting and the perceived failures of the various systems. Given how many human-machine interfaces we encounter on a daily basis, it is difficult to comprehend why problems continue to plague this industry.
· Apple v. Does – A California state appeals court held that online journalists had the same right to protect the confidentiality of their sources as offline reporters do under California’s reporters’ shield law. This result is not surprising, but it appears to be the first formal confirmation that courts would apply the same rules to traditional and online reporters. In addition, the court ruled that the federal Stored Communications Act does not permit civil litigants to subpoena stored e-mail from a service provider; they must subpoena the account holders directly.
· Snow v. DirecTV – In June, the 11th Circuit held that, in order to be protected by the Stored Communications Act, an Internet website must be configured in some way so as to limit ready access by the general public. An anti-DirecTV activist had created a public bulletin board, with a banner containing purported terms of service forbidding DirecTV representatives from entering the site or using its message board. However, the site was configured such that anyone in the public (including the DirecTV representatives) could enter the site, create a profile and use the message board. The court recognized Congress’s intent not to criminalize or create civil liability for acts of individuals who “intercept” or “access” communications or websites that are otherwise readily accessible by the general public. The court suggested that even a statement in the complaint that a plaintiff screens registrants before granting access might have been sufficient to support an inference that the site was not configured to be readily accessible to the general public. However, in the absence of any such statements, the court granted DirecTV’s motion to dismiss for failure to state a claim. As a result, website operators who want to take advantage of the provisions of the Stored Communications Act must take some affirmative actions to be able to demonstrate that the website was not configured to be readily accessible to the general public. Relying on those who are not the website’s intended users to voluntarily excuse themselves will not be sufficient.
· eBay v. MercExchange – In May, the U.S. Supreme Court ruled that, once a patent is found valid and infringed, an injunction does not automatically have to issue. Trial judges are free to weigh competing factors, including the effect of enforcing the patent on the public interest, just as they do in other injunction proceedings. The case revolved around eBay’s “buy it now” feature, which allows customers to purchase items without participating in an auction. In 2003, a jury found that this feature infringed two of MercExchange’s patents. The Supreme Court’s decision requires that patent owners show “irreparable injury” resulting from the defendant’s infringement in order to receive injunctive relief. While this standard should be relatively straightforward for patent owners who practice their technology, the decision may lessen the ability of patent owners who don’t practice their inventions to obtain an injunction (or to threaten one as a negotiating tool).
If anyone else has any Cyberlaw developments that they feel should be on the “Top Ten” list, please feel free to let us know!
Our list of “Top Cyberspace Intellectual Property Cases” for 2006 will be available in January.
December 15, 2006
Top Cyberlaw Developments of 2006
By Eric Goldman and John Ottaviani
[Eric’s Note: I will be in Israel for the rest of the year. So while it’s a little premature to publish an end-of-the-year recap, this may be my last post for the year. John O. has the keys to the blog in my absence, so please keep coming back to see what he has to say. I’ll see you in 2007!]
For several years, we have compiled a list of the top 10 Internet IP cases from the previous year (see the 2005 list). While we hope to continue that tradition, this year had a number of noteworthy Cyberlaw developments that seemed worth cataloging...and besides, who doesn’t like top 10 lists?
Before we get into the list, some developments we deliberately left off the list:
* National Federation of the Blind v. Target. Some have claimed that this case requires websites to comply with the ADA. At best, this overreads the case. The court's reasoning applies only to entities that tightly integrate their websites with their physical retail stores. As a result, it should not apply to pure e-commerce sites, online content publishers or physical retailers that do not integrate their stores with their websites.
* Grokster trial court ruling on remand. In September, the trial court held that StreamCast induced infringement. While the Grokster Supreme Court ruling was the Cyberlaw event of 2005, StreamCast’s loss on summary judgment was a non-event as the Supreme Court practically ordered the district court to rule for the plaintiffs.
* Congress’ failure to act on topics such as adware regulation and network neutrality. Keeping Congress from regulating the Internet is always noteworthy, but it can be hard to grasp the significance of non-events.
With those caveats in mind, our vote for the top 10 Cyberlaw developments of 2006:
#10: KinderStart v. Google.
Every online marketer wants a better search engine ranking, and if necessary, they’ll go to court to get it. After losing a lot of free Google traffic, KinderStart sought to restore its placement with a broad legal attack on Google’s ranking methodology. If a legal attack on a search engine’s algorithm ever succeeds, expect a litigation tsunami from online marketers willing to invest in lawyers instead of SEM/SEO. Fortunately, they won’t succeed. In July, the judge dismissed KinderStart’s claims with leave to amend; the ruling on KinderStart’s amended complaint is imminent, but its prognosis is also dim. This means that search engines can continue to manage their algorithms to maximize relevance for consumers, rather than having judges tell them how to build their algorithms.
#9: EU Convention on Cybercrime.
The US ratified this treaty in August, joining a number of other countries in combating international cybercrime. However, this treaty may have unexpected and adverse procedural and substantive consequences. Procedurally, Declan believes the treaty means that “Internet providers must cooperate with electronic searches and seizures without reimbursement; the FBI must conduct electronic surveillance ‘in real time’ on behalf of another government; U.S. businesses can be slapped with ‘expedited preservation’ orders preventing them from routinely deleting logs or other data.” Substantively, the EFF believes the treaty allows US citizens to be prosecuted for breaking a foreign country’s laws even if the behavior isn’t illegal in the US. Thus, they say “the Cybercrime treaty would introduce not just one bad Internet law into America's lawbooks, but invite the enforcement of all the world's worst Internet laws.” We’re not experts in international law, and it’s possible these predictions overstate the substantive legal effect of signing an international treaty. Nevertheless, to the extent that the Cybercrime treaty removes some legal “borders” from enforcement on the borderless Internet, it is an important development.
#8: Click Fraud settlements.
The topic of click fraud just will not go away. The legal risks associated with it—potentially billions of dollars worth—cast a shadow over search engine valuations. Thus, when Google settled its past click fraud liability for <$90 million (of which only $30 million was out-of-pocket cash) and Yahoo settled its past liability for advertising credits plus $5 million of out-of-pocket cash, champagne corks popped throughout Silicon Valley.
#7: Search Engines and Privacy (Gonzales v. Google; AOL Search Data leak).
John Battelle calls search engine logs the “database of intentions,” and lots of people would like to know more about searchers’ intentions—including law enforcement and governments thirsty to know more about their citizens. For example, in summer 2005, the DOJ asked Google to hand over lots of search data as part of the DOJ’s efforts to defend the Child Online Protection Act. Unlike the other search engines (which received similar requests), Google fought back—putatively to protect its users’ privacy, although Google was also concerned about engineers’ workloads and protecting its trade secrets. In March, Google and the DOJ battled to a draw, with the judge issuing a Solomonic ruling that gave the DOJ some data (but not very much). This ruling caused the issue to fade a little until August, when AOL released a huge chunk of search engine log data in an effort to provide researchers with a useful dataset. Although the data was putatively anonymized, some enterprising reporters used it to identify individual searchers based on their search terms. AOL quickly backtracked and some heads rolled there, but lawsuits are pending and the issue of search engines and privacy remains very, very hot. We guarantee it will resurface in 2007.
#6: Trademark Dilution Revision Act.
The TDRA isn’t specific to the Internet, but it routinely arises in Internet trademark cases. For a while following the Moseley Supreme Court decision, dilution became something of an afterthought. The TDRA revives dilution as a viable claim, so we expect plenty of litigation over dilution in Internet trademark cases. However, the TDRA’s more rigorous definition of famous marks may restrict the impact of the revitalized dilution theory.
#5: Omega World Travel v. Mummagraphics.
In a relatively short opinion in November, the Fourth Circuit undid three years of state anti-spam legislative activity, virtually eliminated one of the anti-spam litigants’ favorite CAN-SPAM provisions (the prohibition on forged headers) and (by extending the California Supreme Court’s Intel v. Hamidi holding to the Fourth Circuit) limited anti-spammers’ claims for common law trespass to chattels. A hat trick of wins for email marketers.
#4: Unlawful Internet Gambling Enforcement Act of 2006.
This law forces payment systems to starve Internet gambling sites of cash. While the law’s poor drafting makes it a litigator’s dream, we don’t expect a lot of litigation interpreting it. Instead, financial services providers will likely interpret the law conservatively, proactively cutting off money flows even to potentially legitimate sites. So even if the law’s legal requirements are unclear, the law will change the complexion of Internet gambling.
#3: Keyword Triggering and Trademark Law.
This year, five cases addressed the topic of trademark liability for using trademarked keywords to trigger ads:
* Edina Realty—an advertiser’s purchase of a trademarked keyword was a trademark use in commerce, but the plaintiff did not get SJ on likelihood of confusion. The case subsequently settled.
* Merck—directly contradicting Edina Realty, the Merck court said that an advertiser’s purchase of a trademarked keyword was not a trademark use in commerce. On rehearing, the court emphasized that it really meant to reject the Edina Realty case.
* 800-JR Cigar—a search engine selling trademarked keywords made a trademark use in commerce, but the plaintiff did not win SJ on likelihood of confusion.
* Rescuecom—a search engine selling a trademarked keyword did not make a trademark use in commerce. This case is noteworthy because it extended the landmark Second Circuit 1-800 Contacts v. WhenU case from its adware context to the search engine context. This case is on appeal to the Second Circuit (disclosure: Eric expects to file an amicus brief on behalf of Google in that case).
* Buying for the Home—an advertiser’s purchase of a trademarked keyword was a trademark use in commerce, but in a counterclaim, the plaintiff’s purchase of the defendant’s keyword was excused as nominative fair use.
Given the inconsistency of these rulings, there’s really not much to say except that this topic will remain hot in 2007…
#2: Search Engines and Copyright.
While everyone was obsessing about Google Book Search, there was a troika of important Google copyright cases from early 2006: Field (Google’s “cache” function), Perfect 10 (image search) and Parker (Google Groups/Usenet). Field was a dream ruling for Google; they couldn’t have done better if they had written the opinion themselves. It adopted five different rationales for why Google wasn’t liable for putting copyrighted works in Google’s misnamed “cache,” including the provocative conclusion that Google lacks volition when it delivers copies out of that “cache.” The Parker case showed the power of the Field precedent, because the judge repeatedly rejected the plaintiff’s arguments with cites to the Field case. On the other hand, the Perfect 10 case resulted in a dangerous loss for Google, putting its image search function at legal risk (and, by implication, threatening many other core search engine functions). Looking at the Field and Perfect 10 cases side-by-side, there is no way to reconcile them regarding Google’s volition (the Perfect 10 opinion doesn’t even address the concept). The Perfect 10 case is on appeal to the Ninth Circuit (along with two other Perfect 10 cases involving Visa and ccBill), so maybe the Ninth Circuit will resolve the inconsistency. If the Ninth Circuit upholds the district court ruling in Perfect 10 v. Google, this could have significant ramifications for the basic operation of search engines.
#1: Barrett v. Rosenthal.
Since the Internet’s beginning, plaintiffs have sought to hold intermediaries liable for user content. Since 1996, however, these efforts have run into the 47 USC 230 brick wall, as interpreted by the Zeran case, which held that 230 preempts both publisher and distributor liability. Two appellate courts—both in California—expressly disagreed with Zeran and held (as opposed to ruminating in dicta) that 230 does not preempt distributor liability: the Grace case, which the California Supreme Court later depublished, and the Barrett case. In this opinion, the California Supreme Court overturns those cases, embraces Zeran as California law, and emphatically slams the door on plaintiffs’ attempts to plead around 230. Without either the Grace or Barrett cases to cite, plaintiffs are left only with precedential scraps—some law review literature and some caselaw dicta—in their efforts to hold intermediaries liable. No doubt plaintiffs will keep trying anyway, but eventually courts will have to consider Rule 11 sanctions when plaintiffs raise tired arguments that were rejected over nine years ago.
December 14, 2006
Site Outage and Comments
By Eric Goldman
The website and blogs had a 4+ hour outage today. My web host pulled the plug (sadly, without warning me) because "it was crashing the server with 250 other customers on it. Something on your website is causing very high server loads." We haven't been able to determine the cause yet, although comment spammers are the most likely culprit. So to convince the web host to put me back online, I agreed to shut down comments for now. Because I'm going to be in Israel for most of the rest of the year, I'm going to leave comments off until January. I'll reassess what to do then. Sorry for any inconvenience if you tried to access the blog and got the scary "this website has been suspended" notice.
December 13, 2006
Unlawful Internet Gambling Enforcement Act of 2006
By Eric Goldman
No one is better at coughing up legislative hairballs than Congress. The Unlawful Internet Gambling Enforcement Act of 2006 (grafted to the end of the SAFE Port Act) was passed over 2 months ago, but my repeated attempts to blog on it have been stymied by its Byzantine drafting. If you want a flagship example of how special interest lobbying combined with legislative mumbling can produce an unreadable mess, check out this beauty.
The statute assumes that some gambling is unlawful under state/federal law, but it doesn’t say what. For example, there is a split of authority about whether the Federal Wire Act (18 USC 1084) already prohibited Internet gambling. This law doesn’t answer that question for us, although the statute (in response to special interest lobbying) “helpfully” excludes a number of specific gaming-related activities from its purview.
Because of the existing legal uncertainty and this statute’s deliberate decision not to address the uncertainty (see 31 USC 5361(b), saying that this law doesn’t change any state/federal/tribal law prohibiting or permitting gambling), no one knows with confidence what actually constitutes illegal Internet gambling. Despite this, Congress prohibits those engaged in the business of betting/wagering (an effectively undefined term) from accepting money in connection with illegal Internet gambling. In other words, Congress can’t figure out what’s illegal, but it’s happy to require some financial gatekeepers to make those decisions for it. There is some rulemaking to work out the procedures for how money should be blocked (AG Gonzales and the Federal Reserve Board get the pleasure of drafting those), so we’ll have to see what the rules say before we can tell how conservative financial gatekeepers will become.
To me, the more interesting piece relates to liability for interactive computer services. As a starting point, 47 USC 230 already immunizes ICSs from any liability based on state gambling laws or any federal civil laws related to gambling. However, 230 does not insulate ICSs against federal criminal laws. Thus, for example, if the Wire Act applies to Internet gambling, 230 would not apply, and ICSs could be criminally liable for third party gambling activity.
The statute partially reduces the 230 limitations by allowing the DOJ or state AGs to seek a court order requiring ICSs to take down a lawbreaking website. 31 USC 5365(c). Without this statutory exception, 230 should have barred any civil orders. At the same time, the statute appears to expand 230 protection to eliminate ICS liability under the Wire Act unless the ICS has “actual knowledge and control of bets and wagers” and owns or operates an illegal gambling website. I’m not exactly sure what it means to have “actual knowledge and control of bets and wagers,” but my suspicion is that this defines a very narrow universe of activities. So, on balance, it looks like this law may have slightly expanded ICS immunization by providing some limits on ICS liability for third party criminal gambling activities.
December 05, 2006
Wikipedia Will Fail in Four Years
By Eric Goldman
About a year ago, I predicted that Wikipedia will fail in 5 years. My logic:
* As Wikipedia traffic grows, it becomes a juicier target for marketers seeking to promote themselves (see the analogous problems Digg is experiencing with results gaming as it gains more traffic)
* Wikipedians are the only thing stopping those marketers from modifying Wikipedia's open-access pages in ways that might degrade the user experience
* Wikipedians, in turn, will fight the marketers because of their pride in the site. However, as marketers become more determined and use automated tools to mount their attacks, Wikipedians will progressively find themselves spending more time combating the marketers.
* The repetitive and unsatisfying nature of these tasks will burn out some Wikipedians, and slowly they will individually decide to invest their time elsewhere.
* As some Wikipedians check out, the remaining Wikipedians will have to pick up the load. With fewer hands, the site will get progressively junkier, which will reduce the pride incentive of the remaining Wikipedians, further accelerating their check-out rate.
* Thus, Wikipedia will enter a death spiral where the rate of junkiness will increase rapidly until the site becomes a wasteland. Alternatively, to prevent this death spiral, Wikipedia will change its core open-access architecture, increasing the database's vitality by changing its mission somewhat.
I'd like to think this prediction reflects my brilliant clairvoyance, but really I'm just extrapolating from the experience of the ODP (Open Directory Project). I think it's fair to say that (1) in its heyday, the ODP did an amazing job of aggregating free labor to produce a valuable database, and (2) the ODP is now effectively worthless. We're still in the first phase with Wikipedia, but the second phase seems inevitable. So on the one-year anniversary of my five-year prediction, I thought it would be a good time to check in on progress.
Wikipedia Relies on a Relatively Small Number of Editors
In theory, Wikipedia draws on the collective wisdom of its readers. In practice, Wikipedia is run principally by a fairly small group of hardcore Wikipedians. In June 2005, Benkler estimated that Wikipedia had 46,000 contributors (people who had contributed 10+ times), 17,000 active contributors (5+ contributions in the last month) and 3,000 very active contributors (100+ in the last month). Separately, Jimmy Wales reportedly said that 0.7% of Wikipedia's users have made 50% of all Wikipedia edits and 1.8% of users have written more than 72% of all articles, and he was quoted in the New York Times in June as saying “A lot of people think of Wikipedia as being 10 million people, each adding one sentence...But really the vast majority of work is done by this small core community” of about 1,000 Wikipedians.
Distilling these observations, Ben McConnell posits a 1% rule: "Roughly 1% of your site visitors will create content within a democratized community." At Epinions, our contributor ratio was a little higher, but the principle was roughly consistent--a very small fraction of readers become writers.
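To make that skew concrete, here's a back-of-the-envelope calculation of what Wales' reported 0.7%/50% split implies. This is only an illustration: the split is as reported above, but the visitor count is a made-up placeholder, since the post doesn't give one.

```python
# Rough arithmetic on the contributor concentration described above.
# The 0.7%/50% split is Wales' reported figure; the visitor count
# below is a hypothetical placeholder for illustration only.

core_user_share = 0.007   # reported: 0.7% of users...
core_edit_share = 0.50    # ...make 50% of all edits

# Per-capita edit rate of each group (share of edits / share of users),
# then the ratio between the two groups.
core_rate = core_edit_share / core_user_share               # ~71.4
other_rate = (1 - core_edit_share) / (1 - core_user_share)  # ~0.50
print(f"Core users edit ~{core_rate / other_rate:.0f}x as often per capita")
# -> Core users edit ~142x as often per capita

# Ben McConnell's ~1% rule applied to a hypothetical readership:
visitors = 1_000_000
print(f"~{int(visitors * 0.01):,} creators per {visitors:,} visitors")
# -> ~10,000 creators per 1,000,000 visitors
```

In other words, the reported numbers imply that the core group out-produces everyone else by roughly two orders of magnitude per capita.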
Not only is the group small, but it’s organized hierarchically just like any other editorially driven media enterprise. As a New York Times article said, Wikipedia "has built itself a bureaucracy....It has a clear power structure that gives volunteer administrators the authority to exercise editorial control, delete unsuitable articles and protect those that are vulnerable to vandalism." Also see this interview for more details about Wikipedia's various bureaucracies and internal procedures.
Is Wikipedia Really an Open-Access Site?
With this bureaucracy and tight editorial control, arguably Wikipedia already has diverged from a paradigmatic open-access site. As explained by Aaron Swartz, the community becomes somewhat insular and self-focused:
insiders account for the vast majority of the edits. But it's the outsiders who provide nearly all of the content. Unfortunately, precisely because such people are only occasional contributors, their opinions aren't heard by the current Wikipedia process. They don't get involved in policy debates, they don't go to meetups, and they don't hang out with Jimbo Wales. And so things that might help them get pushed on the backburner, assuming they're even proposed.
In my experience, Wikipedians also tend to be suspicious of contributions from outsiders. According to the NYT article, one Wikipedian monitors newly created pages and deletes half of those pages. Plus, in my experience, Wikipedians often revert these contributions and direct the contributor to discuss the change in the page’s discussion area (in other words, the contributor must effectively lobby the page’s guardian about the change's merits). This increases the costs of any contributions, and only a limited number of contributors are willing to make such investments. For those who don't undertake those lobbying efforts, I suspect there is widespread reversion of “drive-by” contributions.
The Incentives Problem
This model of Wikipedia relies on a small number of contributors who fiercely control contributions. Thus, for the model to sustain itself, these contributors must remain motivated—or, at least, the rate of new hardcore Wikipedians joining must exceed the rate of their departures.
However, the Wikipedia model provides few extrinsic benefits for Wikipedians. They don’t get paid, so Wikipedia will never attract large numbers of Wikipedians who can earn significant bucks by selling their time elsewhere. More importantly, Wikipedia doesn’t provide any meaningful attribution for power contributors; they are largely anonymous/faceless. Clearly, many people are willing to contribute time to a project if they get some reputational benefits; indeed, giving content/services away for free as a way of creating awareness of other offerings can be a hugely successful business model in many circumstances. But participating in Wikipedia doesn’t generate these reputational benefits.
So Wikipedia must constantly attract new Wikipedians without being able to offer them either cash or credit. Unquestionably, as we’ve seen, thousands of Wikipedians are willing to provide enormous amounts of their time for intrinsic benefits—the feeling of doing good, the fun of interacting with like-minded contributors, the power of contributing to the discourse, etc. But with these limited incentives, I think it’s going to be hard to recruit large numbers of Wikipedians over time, and it’s going to be even harder to keep power Wikipedians contributing at a consistent rate as they experience ennui or life changes that increase the opportunity cost of their time.
FWIW, contrast this with Epinions. Epinions paid contributors cold, hard cash and offered them a host of reputational benefits. Even so, Epinions has experienced significant turnover. I suspect that a supermajority of Epinions’ power users from 1999 are long gone by now. Nevertheless, the incentives have been enough for the community to replenish itself.
Meanwhile, the Attacks Continue
We continue to see a variety of attacks on Wikipedia. In one amusing attack from July, Stephen Colbert told his viewers to edit the Wikipedia entry on "elephants." Wikipedia had to immediately "semi-protect" the page from the loosely coordinated vandalism.
Further, marketers have awoken to Wikipedia’s potential. Wikipedia is fighting back by threatening to shame marketers through a public blacklist of link spammers, with the further hope that search engines will blacklist these people as well. This is a terrible idea, but the point is that the link spammers are getting to Wikipedians and provoking a response. This trend will only get worse.
I love Wikipedia. I use it every day. Based on the stats from my Google personalized search, Wikipedia is the #1 site I click on from Google search results. So, I'm not rooting for it to fail. But the very architecture of Wikipedia contains the seeds of its own destruction. Without fame or fortune, I don’t think Wikipedia’s incentive system is sustainable. Meanwhile, we can see the very beginning of the attacks that will lead to the death spiral. As a result, I stand by my prediction for 2010.
Nov 2009 update: I have explained my theories in more detail in this new article.
My Own Wikipedia Page
Finally, I should note that in January 2006, a Wikipedian created a Wikipedia page for me due to my prediction. Recently, that page got tagged that "article lacks information on the importance of the subject matter." It was hard not to take this personally…until I realized that at least I wasn’t marked "non-notable" or for deletion (yet). (But my mom took great umbrage at the questioning of her offspring’s importance!) [see this update about my personal page]
UPDATE: I have not been able to verify the authenticity of this September email from Brad Patrick, Wikimedia's in-house lawyer. However, according to the email, Patrick says that "corporate self-editing and vanity page creation...is simply out of hand" and, in response, "when [editors] see new usernames and page creation which are blatantly commercial - shoot on sight." Not only does this reinforce the difficulty that average users have in making contributions, but obviously I'm predicting that the shoot-on-sight policy won't be sufficient to solve the problem.
UPDATE 2: I closed comments because of comment spam. If you look at the existing comments, you'll see a thread on specialized vs. general wikis. In response to that thread, a Wikipedia editor sent me the following email (reposted with permission):
This is not true if the medieval articles and Star Wars articles are on entirely separate wikis. There are also network effects to consider - pretty much every article sooner or later needs to link out of its specialized area to more generalist articles - and a host of other benefits. If the separate wikis have interwiki linking set up, it isn't all *that* much harder to link to articles on other wikis than it is to link to an article on the home wiki, but that's an esoteric and not popularly well known feature, and even at its best still adds friction to the work.
So basically the only way a specialist wiki can survive or thrive is if it is something Wikipedia refuses to carry. I was involved in the Star Wars area of en Wikipedia articles for a long time, and it did well, until it began to do _too_ well: editors from other areas saw the detail and profusion of articles in the SW area, and began moving to trim it down drastically and raise standards for the remaining articles. In other words, Wikipedia in a sense decided to stop carrying SW articles. This rejection prompted most of the hardcore SW editors to fork and begin building Wookieepedia (on Wikia) using the base of SW articles from Wikipedia. Now their resource is so good that SW editing is mostly dead on Wikipedia, with the exception of a few editors like Deckiller who focus on organizing and removing and improving what's left, and occasionally borrowing articles from Wookieepedia on the notable new or missing stuff.
Similarly, Wikibooks works because the Wikipedias refuse to carry that sort of thing, but people want to work on them anyway.
Wikimedia Commons *doesn't* work so well because you can still upload all the Free images to the Wikipedias quite easily - they haven't decided to force everyone to go away (and go to Commons to upload Free images) and so the Wikipedias are still a force to be reckoned with.
To summarize, if a Wikipedia allows something, the benefits of doing that something on that Wikipedia are so compelling that rival wikis can't really compete and wither. But if a Wikipedia cracks down on something, then they sometimes provide a seed for a new wiki to grow around and also a number of editors who want to work on that seed but are balked within the Wikipedia.
December 03, 2006
"Junk Mail is Alive and Growing"
By Eric Goldman
Many people thought the era of cheap electronic communications would spell doom for junk mail, given the cost advantages of distributing electronic solicitations over printing and mailing dead-tree ones. But instead, over the past year, marketers sent 114 billion pieces of direct mail--up 15% from five years ago. So what's going on? Some theories:
* Consumers hate telemarketing and spam but are more tolerant of junk mail
* Telemarketing and email laws have driven marketers to less regulated marketing media (a process I call intermedia selection; it's a manifestation of cross-elasticities of demand between marketing media)
* Junk mail isn't subject to the equivalent of email blocklists or filtering.
* Junk mail can be effectively combined with marketing in other media to create an integrated multi-exposure marketing package
* Websites that form relationships with consumers are increasing their offline communication to them
* Junk mail still works. People respond to it--the DMA claims a 2.15% response rate across the board, which sounds lousy but is pretty good compared to other marketing media (see the back-of-the-envelope math below)
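For perspective, here's a quick computation of what those figures imply, assuming the 15% growth and 2.15% response rate are as reported:

```python
# Sanity-check the direct mail figures cited above.
pieces_2006 = 114e9        # 114 billion pieces of direct mail this year
growth = 0.15              # reported as up 15% from five years ago
response_rate = 0.0215     # DMA's claimed across-the-board response rate

pieces_5yrs_ago = pieces_2006 / (1 + growth)
responses = pieces_2006 * response_rate

print(f"Implied volume five years ago: ~{pieces_5yrs_ago / 1e9:.0f} billion pieces")
# -> ~99 billion pieces
print(f"Implied responses this year: ~{responses / 1e9:.2f} billion")
# -> ~2.45 billion responses
```

Even at a 2.15% rate, that's on the order of 2.5 billion responses a year--plenty to keep marketers mailing.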