2H 2018 Quick Links, Part 7 (Content Moderation, Section 230, & More)

[ugh, somehow this got lost in my drafts folder. Sharing it now…]

* Vice: “The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People.” If you read only one article on content moderation, choose this one. Among the things I learned: Facebook has a policy for images depicting celebrities photoshopped with anus mouths/eyes.

* New York Times: Inside Twitter’s Struggle Over What Gets Banned

* Washington Post: Jack Dorsey says he’s rethinking the core of how Twitter works

* Vice: Internal Documents Show How Facebook Decides When a Poop Emoji Is Hate Speech

* Ars Technica: “The Microsoft Azure cloud computing service threatened to stop hosting Gab, a self-described ‘free speech social network,’ unless the site deleted two anti-Semitic posts made by a neo-Nazi who previously ran for a US Senate seat.”

* NY Times: Patreon Bars Anti-Feminist for Racist Speech, Inciting Revolt

* WSJ: “Many current and former Facebook insiders argue that the company’s desire to avoid criticism from conservatives prevents it from fully tackling broader issues on the platform.”

* Daphne Keller, Internet Platforms: Observations on Speech, Danger, and Money:

– “platforms really are making both kinds of mistakes. By almost anyone’s standards, they are sometimes removing too much speech, and sometimes too little.”

– “The cost-benefit analysis behind CVE campaigns holds that we must accept certain downsides because the upside—preventing terrorist attacks—is so crucial. I will argue that the upsides of these campaigns are unclear at best, and their downsides are significant. Over-removal drives extremists into echo chambers in darker corners of the internet, chills important public conversations, and may silence moderate voices. It also builds mistrust and anger among entire communities. Platforms straining to go “faster and further” in taking down Islamist extremist content in particular will systematically and unfairly burden innocent internet users who happened to be speaking Arabic, discussing Middle Eastern politics, or talking about Islam. Such policies add fuel to existing frustrations with governments that enforce these policies, or platforms that appear to act as state proxies. Lawmakers engaged in serious calculations about ways to counter real-world violence—not just online speech—need to factor in these unintended consequences if they are to set wise policies.”

– “Neither platforms nor lawmakers can throw a switch and halt the flow of particular kinds of speech or content. Artificial intelligence and technical filters cannot do it either—not without substantial collateral damage. Delegating speech law enforcement to private platforms has costs. Lawmakers need to understand them, and plan accordingly, when deciding when and how the law tells platforms to take action.”

– “By imposing costs on individuals and communities well beyond actual extremists, CVE efforts can reinforce the very problems they were meant to correct. Feelings of alienation and social exclusion are, security researchers say, important risk factors for radicalization, as are frustration and moral outrage. Knowing this, yet accepting aggressive CVE campaigns’ likely impact, may be a serious miscalculation. If suppressing propaganda from real terrorists comes at the cost of high over-removal rates for innocent Arabic-language posts or speech about Islam generally, the trade-off may be not only disrespectful and unfair but dangerous.”

* Slashdot: How Do Americans Define Online Harassment?

* Pew: 79% of Americans feel that online services have a responsibility to step in when harassing behavior occurs on their platforms. Perhaps counter-intuitively, this expectation that services will police harassment provides another reason why attempts to limit online services’ editorial discretion are a bad idea.

* Amac: Recent Podcasts & Articles on Content Moderation

* CNN: Complaints prompt Amazon to remove products that are offensive to Muslims

Election

* New Yorker: Can Mark Zuckerberg Fix Facebook Before It Breaks Democracy?

* It was a huge failure that Facebook & other social media services didn’t anticipate Russian election hacking. But I’m sure they are fully invested in fixing those mistakes now.

* NY Times: The Plot to Subvert an Election: Unraveling the Russia Story So Far

* New Yorker: How Russia Helped Swing the Election for Trump

* Buzzfeed: American Conservatives Played A Secret Role In The Macedonian Fake News Boom Ahead Of 2016

* The Atlantic: The Grim Conclusions of the Largest-Ever Study of Fake News

* Nieman Lab: Republicans who follow liberal Twitter bots actually become more conservative

Facebook

* Buzzfeed: How Duterte Used Facebook To Fuel the Philippine Drug War

* Kash Hill: ‘People You May Know:’ A Controversial Facebook Feature’s 10-Year History

* Buzzfeed: The Rise, Lean, And Fall Of Facebook’s Sheryl Sandberg

* Wired: The 21 (and Counting) Biggest Facebook Scandals of 2018

* If 2 billion users implicitly value Facebook’s services at an average of $1,000/yr, and the services are free to use (so the consumer surplus roughly equals what users would be willing to pay), Facebook produces $2+ trillion a year of consumer surplus.
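The back-of-the-envelope arithmetic behind that figure (the $1,000/yr average valuation is an assumed input, not a measured one):

\[
2 \times 10^{9}\ \text{users} \times \$1{,}000\ \text{per user per year} = \$2 \times 10^{12}\ \text{per year} \approx \$2\ \text{trillion/yr}
\]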

Section 230

* Weimer v. Google, Inc., 2018 WL 5278707 (D. Mont. Oct. 24, 2018): “47 U.S.C. § 230 insulates Google and Microsoft from liability. That statute provides that “[n]o provider … of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” 47 U.S.C. § 230(c)(1). Accordingly, Google and Microsoft cannot be held liable to Weimer for the publication of pornographic materials by third parties. Weimer further argues that 47 U.S.C. §§ 206 and 207 authorize his claims against the Federal Defendants, but, as discussed above, these provisions clearly do not apply to federal agencies.”

* The Verge: Sen. Ron Wyden on Breaking Up Facebook, Net Neutrality, and the Law That Built the Internet

* Engine/CKI: Nuts & Bolts of User-Generated Content: The International Intermediary Liability Framework

* MediaPost Digital News Daily: Trump Attacks Google News Results, Search Experts Debunk Bias Claim