Yale Reputation Economies Symposium Recap
By Eric Goldman
Reputation is a hot topic in Cyberlaw circles, so the Yale ISP conference on Reputation Economies in Cyberspace came at a propitious time. Some of my meta-observations from the talks.
1) We lack a uniformly accepted definition of reputation. During the conference, it was clear that most speakers were working with their own idiosyncratic definitions. Without a standardized definition, people can easily talk past each other.
2) Reputational systems are everywhere–FICO scores, letters of recommendation, Google PageRank, product review sites like Epinions, spam filters, employee evaluations, etc. I plan to catalog them in my next big paper. For now, Jonathan Zittrain gave two interesting examples: (1) British pubs are now taking patrons’ fingerprints and publishing a blacklist of rowdy pubgoers to other pubs, and (2) websites allow angry drivers to criticize bad drivers by license plate number.
3) We often treat reputation as a monolithic assessment (good or bad), but it is granular and contextual. Reputation systems need to reflect these nuances, and we’re seeing movement in that direction. For example, eBay is considering more granular feedback scores, which might entail different scores for product description accuracy and shipping speediness. However, increased granularity is subject to the accuracy/simplicity tradeoff—increased complexity improves accuracy but makes it more costly to participate in the system.
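To make the granularity point concrete, here's a toy sketch of what multi-dimensional feedback might look like (the dimension names and numbers are invented, not eBay's actual schema):

```python
# Hypothetical sketch of granular feedback: instead of one monolithic
# score per transaction, buyers rate several dimensions separately.
from statistics import mean

feedback = [
    {"description_accuracy": 5, "shipping_speed": 3},
    {"description_accuracy": 4, "shipping_speed": 2},
    {"description_accuracy": 5, "shipping_speed": 4},
]

def dimension_averages(entries):
    """Aggregate each dimension independently rather than collapsing
    everything into one good/bad number."""
    dims = entries[0].keys()
    return {d: round(mean(e[d] for e in entries), 2) for d in dims}

print(dimension_averages(feedback))
# {'description_accuracy': 4.67, 'shipping_speed': 3.0}
```

The tradeoff shows up immediately: each extra dimension is another rating the buyer has to supply, which is exactly the participation cost mentioned above.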
To overcome the accuracy/simplicity tradeoff and reduce collection costs, reputational data can be collected automatically. Bill McGeveran compared Facebook’s automatic collection of recommendations through Beacon with ratemyprofessor.com (a site I’ve critiqued before–1, 2, 3), where the communication costs discourage students from providing feedback unless they hold extreme views (i.e., love it/hate it).
Jonathan Zittrain suggested that people should be able to request that some information should not become part of their reputation. He gave robots.txt as an analogy; it is a voluntary standard that web publishers can use to keep content (that might have reputational implications) out of the search engines, which in turn significantly reduces its visibility. Although robots.txt is voluntary, it is widely followed. Jonathan thinks a similar voluntary system might be helpful for reputational data.
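For readers unfamiliar with the analogy: robots.txt is a plain-text file placed at a site's root that asks crawlers not to index certain paths. A minimal example (the paths here are purely illustrative) looks like this:

```
User-agent: *
Disallow: /private/
Disallow: /old-posts/
```

Nothing technically prevents a crawler from ignoring these directives, which is Jonathan's point: a purely voluntary convention can still be widely honored in practice.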
4) As noted by several speakers, reputation has economic value that can be converted into cash. For example, spammers have better delivery success—and thus make more money—if they can work with a high-reputation email address that is less likely to be blocked/filtered, and a seller with high feedback commands premium prices for his/her auctions. These payoffs create incentives for “bad guys” to capitalize on undeserved reputation, leading to the hijacking of high-feedback accounts and feedback-inflating activity (such as the serial consummation of penny auctions) that can be used for a short but intense burst of fraud.

Bill McGeveran gave Facebook’s Beacon as another example of reputation’s selling power. In that case, Facebook and marketers are engaged in “reputational piggybacking” to get extra credit from the “recommending” user’s validation.
Because reputation has economic payoffs, we are tempted to provide property-like protections for reputation. Trademark law is an example of this in the commercial context. In contrast, with respect to individuals, damaged reputations can have significant non-economic harms that are not well-handled through property systems. Discussions about legal protection for reputations can get confusing when economic protections are conflated with these non-economic harms.
5) No reputational system will be perfectly accurate. Any system will have Type I and Type II errors. So how accurate must a reputational system be for it to be credible? We should assess this question by comparing a reputational system’s errors against the errors from alternative systems (or the absence of the system altogether).
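The comparative framing can be made concrete with some arithmetic. Here's a toy calculation (all the rates are invented for illustration) weighing a reputation system's Type I errors (flagging the good) and Type II errors (missing the bad) against having no system at all:

```python
# Illustrative comparison of error rates. The numbers are made up;
# the point is that a system should be judged against its
# alternatives, not against perfection.

def total_error_rate(false_positive_rate, false_negative_rate, base_rate):
    """Overall error rate, given the prevalence of bad actors (base_rate).
    Type I errors occur among the good (1 - base_rate) population;
    Type II errors occur among the bad (base_rate) population."""
    return (1 - base_rate) * false_positive_rate + base_rate * false_negative_rate

bad_actor_prevalence = 0.10

# No system: never flags anyone, so it misses every bad actor.
no_system = total_error_rate(0.0, 1.0, bad_actor_prevalence)

# A hypothetical reputation system: some false alarms, fewer misses.
reputation_system = total_error_rate(0.05, 0.30, bad_actor_prevalence)

print(no_system, reputation_system)  # the system wins despite being imperfect
```

Under these assumed numbers the imperfect system still beats the baseline, which is the credibility test the paragraph above proposes.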
A reputational system might be improved through more robust error correction mechanisms. Jonathan Zittrain gave the example of the Google News feature that allows a quoted individual to add comments right below the article. This reminded me a lot of Frank Pasquale’s asterisk proposal.
6) Reputational information is time-sensitive in that more recent reputational information is more useful in assessing reputation. Jonathan Zittrain proposed a concept of “reputational bankruptcy” where “old” information could be permanently suppressed because it is not useful for making future assessments. He analogized this to the time-based degrading of eBay’s feedback score, which segregates transactional information by date (i.e., 1 month, 6 months, all time).
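The time-windowing idea can be sketched in a few lines. This is a toy aggregator loosely modeled on the date-segregated display described above (the window sizes and data are my own illustration, not eBay's actual algorithm):

```python
# Sketch of time-windowed feedback aggregation: recent feedback is
# reported separately so stale information carries less weight.
from datetime import date, timedelta

def windowed_scores(feedback, today, windows_days=(30, 180, None)):
    """feedback: list of (date, +1 or -1) tuples. None means all time."""
    results = {}
    for days in windows_days:
        if days is None:
            relevant = feedback
            label = "all time"
        else:
            cutoff = today - timedelta(days=days)
            relevant = [(d, s) for d, s in feedback if d >= cutoff]
            label = f"last {days} days"
        results[label] = sum(s for _, s in relevant)
    return results

today = date(2008, 1, 15)
feedback = [
    (date(2007, 2, 1), +1),   # old praise
    (date(2007, 12, 1), -1),  # recent complaint
    (date(2008, 1, 10), +1),  # very recent praise
]
print(windowed_scores(feedback, today))
# {'last 30 days': 1, 'last 180 days': 0, 'all time': 1}
```

A full "reputational bankruptcy" regime would go further and permanently drop the oldest bucket, rather than merely reporting it separately.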
More resources on this topic:
* the collection of position papers from the event