Interview About Section 230 and COVID Misinformation

[I did another interview with Mathew Ingram at Galley by CJR]

Ingram: Eric, thanks very much for doing this. I know we’ve discussed Section 230 before on Galley, so I don’t want to go over old ground, but is there anything about the current situation that is different or has changed your view on Section 230 and its benefits or disadvantages? Does the proposed Klobuchar bill have any merit, do you think?

Goldman: Thanks for having me again. The Section 230 landscape remains about the same as in 2020. Most of the Section 230 reform bills are messaging bills designed to play to the crowd and maybe goose campaign donations. As for their policy “merits,” most of the proposed bills would predictably lead to terrible outcomes that would undermine or eliminate the things that people love most about the Internet.

Unfortunately, the bill from Sens. Klobuchar and Lujan is another example of that. Health misinformation may not create any legal liability because it may be constitutionally protected. If so, removing the liability shield in Section 230 won’t change the volume or visibility of health misinformation (but it would mess up Section 230). Worse, the bill basically designates the HHS Secretary as a one-person censorship board. That’s clearly unconstitutional. Thus, like virtually all of the other Section 230 reform bills, this bill isn’t a serious policy proposal. It’s another PR stunt, and an unfortunate one at that.

As I’ve said many times, when we really drill into the problems that legislators claim they want to fix, invariably Section 230 is the solution to that problem, not the problem itself. Thus, reforming Section 230 would almost certainly remove the “best”/“least worst” option and force us to adopt inferior alternative solutions.

Ingram: Thanks, Eric. I’ve asked others this question, so I’d like to ask you as well: Is there any legal justification, or could there be any, even theoretically, for making Facebook or Google liable for harms caused by health disinformation or hate speech etc. solely because their algorithms amplified that speech or information? Is there any world in which that might work?

Goldman: No. “Publication” and “amplification” are the same thing. Any amplification decisions are publication decisions entitled to equal constitutional protection. I don’t see this as a close question legally.

More importantly, the idea would lead to terrible policy results. What, exactly, is the legislators’ end game here? By creating major legal disincentives to “amplifying” content, they seem to encourage Facebook or Google to passively host web content and provide reduced social features. But web hosting is a commodity; social media users can easily find venues to passively host their content. What makes social media useful and special is, well, the social piece–which necessarily involves prioritizing some content over others. Facebook does purport to allow its users to see their friends’ content chronologically, but that feature is uninteresting to people with hundreds of friends who want to sift for the most interesting items and avoid a massive volume of uninteresting posts.
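[To make the prioritization point concrete, here is a minimal toy sketch. It assumes a hypothetical Post record with made-up timestamp and engagement fields; it is not any platform’s actual ranking system. The point is that a chronological feed and a ranked feed publish the same posts and differ only in ordering.]

```python
# Toy illustration only: hypothetical Post fields, not a real platform API.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: float   # seconds since epoch (hypothetical field)
    engagement: int    # e.g., likes + shares (hypothetical signal)

def chronological_feed(posts: list[Post]) -> list[Post]:
    """Chronological ("passive") ordering: newest posts first."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def ranked_feed(posts: list[Post]) -> list[Post]:
    """Engagement-ranked ("amplified") ordering: the same posts, reordered
    by an engagement signal. Both feeds publish everything; they differ
    only in which items they prioritize."""
    return sorted(posts, key=lambda p: p.engagement, reverse=True)
```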

Perhaps the legislators naively think that Internet services will keep their existing social tools but magically isolate and screen out “health misinformation” and “hate speech” from their massive corpus of content. That’s a delusional fantasy because no one can scalably make those determinations. Instead, Internet services absolutely will change their behavior in response to changes in the liability scheme, but not necessarily in the ways legislators hope.

Finally, I’ll note an irony in the Klobuchar/Lujan bill. If the bill drives Internet services away from amplification and into becoming passive web hosts, this actually mirrors the proposals that are coming from the hardest-core conservatives who want Internet services to make no distinctions among content. If I were a politically moderate legislator, it would horrify me to think that my proposal is functionally indistinguishable from the burn-down-the-Internet proposals coming from the political cranks.

Ingram: Got it — thanks. So is the kind of disinformation-at-scale that we’ve seen in recent years, whether it’s about COVID or the election, just the way things are now? Is there any legal remedy for the kind of harms that some believe Facebook and Google and Twitter are causing by amplifying — in some cases deliberately — this kind of content? Or is that a fool’s errand?

Goldman: At this point, the dissemination pathway for disinformation is quite clear:

Politicians lie => cable broadcasters like Fox News amplify the lie => social media discusses the lie and extends the amplification.

Focusing on the social media activity without fixing the first two predicate steps won’t solve anything. So long as our politicians keep lying to us and Fox News keeps amplifying it, we’re still screwed no matter what happens on social media.

Worse, if the Constitution protects our politicians’ lies and Fox News’ amplification of them, then we may lack the legal tools necessary to systemically stop the lies. If so, then we need to rethink our policy approaches to the entire information ecosystem.

Indeed, this is a good example of how Section 230 might be the solution, not the problem. If the government can’t ban disinformation, but Internet services can–and do–undertake the socially valuable work of curbing it under Section 230’s protection, we get better results than we would if services lost that protection and were inhibited from doing the work.

That’s one of many reasons why the various “must-carry” laws, like Florida’s social media censorship law, are so pernicious. They would sideline Internet services in the fight against disinformation, when Internet services may be our most crucial line of defense today.

Ingram: Thanks. I wanted to ask about another side to the anti-Section 230 fight, namely Donald Trump’s lawsuits against the platforms — which are based on the idea that the major social networks shouldn’t take certain things down but are doing so, rather than the idea that they should take things down and aren’t. If I’m reading the suits correctly, part of the argument is that the platforms are acting the way they are because they have been pressured by Democratic legislators, which amounts to a First Amendment violation. Does that hold any water at all, do you think?

Goldman: No. There is a big difference between the government asking Internet services to think more about the content they publish–which happens all the time–and the government compelling Internet services to take censorial actions, which I have yet to see the Democrats do. Then again, no U.S. government official in the last 5 years did more to coerce Internet services to change their publication decisions than Trump did as president, so Trump’s legal position is characteristically ironic in a not-funny way.

Ingram: Thanks, Eric. We are just about out of time, but maybe just one last question if you don’t mind. Do you think there is anything wrong with Section 230 that needs to be tweaked or adjusted, or is it still accomplishing what it was designed to do with minimal downside?

Goldman: Content “governance” necessarily involves a series of complex tradeoffs, so it’s rare or impossible for any policy (or change to that policy) to benefit everyone with no downsides. Thus, it’s not quite fair to ask whether Section 230 has “minimal downsides,” because every content governance policy has unavoidable major downsides. As I mentioned at the beginning, Section 230 may be the “least worst” policy approach: it strikes a balance among the competing tradeoffs in a way that has unlocked an enormous amount of social good, and it has helped Internet services achieve better outcomes than the government could have achieved given its constitutional constraints.