Trump’s “Preventing Online Censorship” Executive Order Is Pro-Censorship Political Theater

Introduction

We all knew the day would come when the Trump Administration would try to censor the Internet. This was inevitable because of Trump’s dictatorial lust, his antipathy towards independent media sources that can hold him accountable for his actions, and his thin-skinned fragility, which was freshly aggrieved when Twitter wrist-slapped him (“fact-checked” him) for lying about the risks of vote-by-mail fraud.

It seems counter-intuitive that Trump would want to destroy the Internet. Trump vainly extols the number of his social media followers, especially on Twitter; Twitter has let Trump freely spew propaganda, invective, and outright lies to his constituents; Trump has constantly violated Twitter’s TOS without consequence; and his campaign has spent millions of dollars on Internet ads (4x as much as Biden at Google), and those ads played a huge role in his 2016 election. Given the enormous benefits Trump derives from social media, you would think his administration would enthusiastically protect the industry rather than trying to blow it up.

Nevertheless, in a move that seems contrary to Trump’s long-term interests, yesterday he signed an Executive Order (Executive Order 13925 of May 28, 2020) entitled “Preventing Online Censorship.” [Note: the prior night, a draft of the EO leaked. I prepared a redline showing what changed from that draft to the final draft.] As with other Trumpist attempts targeting Section 230 (e.g., the bills from Sen. Hawley and Rep. Gosar), the titling is the first lie. The EO doesn’t seek to prevent online censorship, it seeks to impose it. Reinterpret the title as “Embracing Online Censorship” and the EO makes so much more sense.

But the EO follows Trump’s modus operandi to advance its goals. In the old days, when a president wanted to change a statute, the president would contact Congress and work collaboratively to build consensus for reform. That is not how Trump does things–that method would require hard thought and work. Instead, Trump-style Section 230 reform is to bloviate a lot, delegate any real work to others, spread that responsibility around so that failure can be pinned on others, and claim victory without doing any work at all. It’s the Tom Sawyer approach to presidenting. #MAGA.

Because the EO talks a lot but has little direct impact, it’s not a serious reform effort. Instead, it is performative–political theater. The real goals are to: (1) keep working the refs at the Internet companies so that they remain afraid to hold Trumpists accountable out of fear that retribution will create an existential threat for them; (2) drum up donors for his campaign by claiming that Trump once again is fighting to protect his voter base from powerful evil institutions that hold them down; and (3) flood the zone and set the media’s agenda to freeze out other adverse news coverage of his conduct–such as the fact that 100,000 Americans have died of COVID-19, and many tens of thousands of Americans would still be alive if the federal government had better managed the crisis response. On the latter point, Trump used his well-worn tactic of publicly attacking our cherished civil liberties, and because that attack looked like a serious threat to those liberties, the media once again put it at the top of its agenda. Trump is gifted at steering the media machine, and we all know it, yet we fall for it every single time. So even if the EO never does a damn thing from a legal standpoint, the EO has already succeeded at its real goals. Trump already won.

This overly long and hastily written blog post is organized in two layers. The first layer gives you a high-level overview of the EO. If you’re looking for the hot take or some quotes, read that section and call it a day. The second layer does a deep dive that only a Section 230 geek could love; it’s not meant for a mass audience, though you’re welcome to kill some brain cells on it if you want.

High-Level Overview of the EO

The EO has eight sections:

Section 1 contains policy statements: the kind of nonsense prevalent in the Trumpist community’s conversations about Section 230.

Section 2 offers and explains its nonsensical interpretation of Section 230. The section then requires the executive branch to follow this interpretation. It also directs the Commerce Department to ask the FCC to do rulemaking to interpret Section 230.

Section 3 instructs federal agencies to report on their online advertising. It also asks the DOJ to do something with those reports.

Section 4 says it’s the policy of the executive branch that “large” online platforms shouldn’t restrict free speech. It sends the 16,000+ reports generated from last year’s “Tech Bias Reporting” tool to the DOJ and FTC for their perusal. It then encourages the FTC to bring enforcement actions under Section 5 of the FTC Act against Internet companies for false marketing statements. If it wants, the FTC could also do a report on the 16,000+ reports being delivered to it.

Section 5 tells the AG to form a working group of state AGs to investigate how state laws can be used against Internet services; to develop model state legislation; and to gather information on specified topics.

Section 6 tells the AG to draft federal legislation to advance the EO.

Section 7 defines “online platform” as “any website or application that allows users to create and share content or engage in social networking, or any general search engine.”

Section 8 has some boilerplate.

* * *

As I mentioned, the EO is largely performative and not substantive. To understand why, look at the mandatory consequences of the EO. The EO tries to offload much of the dirty work to the DOJ, the FTC, and the FCC. However, all of those agencies are independent and can ignore the requests if they want. Excluding those discretionary requests to entities that may not do anything, here is the complete list of actual things that the EO guarantees will happen:

  • The declarations of policy, in theory, govern federal agency decision-making on issues governed by Section 230. However, if tested in court, those agency decisions will fail because the EO’s interpretations of Section 230 cannot survive judicial scrutiny. Knowing that, most agencies won’t act on the EO’s Section 230 interpretation because they can’t afford to waste their resources.
  • The Commerce Department must tender the request for rule-making to the FCC.
  • Federal agencies need to prepare reports about their online ad spends.
  • Someone has to send the 16,000+ Tech Bias reports to the DOJ, FTC, and any newly constituted working group.

That’s it. In terms of actual substance, DJT Jr. might accurately call the EO a “nothingburger” if he were ever capable of publicly acknowledging his pop’s limitations. Yet, because it pours more gasoline on the fire raging for Section 230 reform, the EO has pernicious implications for the Internet–despite its laziness.

I’ve gotten some questions about preemptive litigation to invalidate the EO. Why would anyone do that? The EO has no immediate consequences that matter. Because of that, I’m not sure who would have standing to sue.

Deeper Dive on the EO

I’ll now do a sentence-level review of the EO. I encourage most of you to stop reading the blog post here. Unless you are a hardcore Section 230 geek (and if you are, I salute you!), you have already gotten over 90% of the value you’re going to get from this blog post.

* * *

Section 1: Policy

This section is propaganda. Some examples:

  • “we cannot allow a limited number of online platforms to hand pick the speech that Americans may access and convey on the internet.” This is typical rhetorical misdirection, assuming facts not in evidence. For example, there are not a “limited number” of online platforms. Based on the EO’s overly expansive definition of that term, there are possibly millions of “online platforms.” Further, the “hand pick” verb is colorful but c’mon. Indeed, the EO mentions the need to gather evidence that algorithms are engaging in viewpoint bias, which wouldn’t be needed if the speech were being “hand picked.”
  • “When large, powerful social media companies censor opinions with which they disagree, they exercise a dangerous power.” When non-governmental publishers decide what content is fit for their audience, that isn’t “censorship,” that’s editorial discretion–and the First Amendment protects that editorial right from government interference. And if we’re concerned about large entities censoring opinions with which they disagree, I nominate our own federal government as a way bigger threat.
  • That paragraph continues: “They cease functioning as passive bulletin boards, and ought to be viewed and treated as content creators.” Huh? As everyone knows (or should know), Section 230 isn’t limited to passive bulletin boards. Also, the Section 230 caselaw has been clear for decades that editorial decisions about publishing third-party content do not constitute “content creation” for Section 230 purposes.
  • “platforms function in many ways as a 21st century equivalent of the public square.” This argument appeals to pro-censorship enthusiasts on all sides of the political spectrum. If social media services are the same as streets and parks, the logical implication is that they should be treated as state actors. That’s not the law, as the PragerU v. YouTube case held, but it has become a popular talking point in both liberal and conservative pro-censorship communities.
  • “Twitter, Facebook, Instagram, and YouTube wield immense, if not unprecedented, power to shape the interpretation of public events; to censor, delete, or disappear information; and to control what people see or do not see.” So much nonsense packed into 34 words. “Unprecedented” power? Try living in a country with only state-controlled media. Can any one service “disappear” information? Do the drafters even understand what that means? And unprecedented power to control what people see/don’t see? Take a look at media consolidation in the 1970s, especially the monopolies held by local newspapers in their metro areas.
  • “As President, I have made clear my commitment to free and open debate on the internet.” 🤣🤣🤣. See, e.g., this recap. And a reminder that a federal appellate court ruled that President Trump engaged in unconstitutional censorship on Twitter (due to his thin-skinned vanity, natch).
  • “[Free and open debate] is essential to sustaining our democracy.” Sure, but here’s the thing. Protecting voting rights is even more essential to sustaining our democracy. Twitter fact-checked Trump precisely because Trump lied to advance voter suppression objectives. So if the Trump administration really cares about sustaining our democracy, it should prioritize ending its own voter suppression efforts.
  • “Online platforms, however, are engaging in selective censorship that is hurting our national discourse.” Pro-tip: every time you hear the word “censorship” describing a non-state actor’s behavior, substitute the words “editorial discretion” and ask the speaker why they chose the misleading euphemism.
  • “Twitter now selectively decides to place a warning label on certain tweets in a manner that clearly reflects political bias.” This was added in the final draft; it wasn’t in the prior draft. So yes, literally this EO and all of the resulting conflagrations are spurred by Trump’s furor over Twitter’s wrist slap. See the Baby Trump balloon on the right, but you probably already had that image in your head.
  • “As recently as last week, Representative Adam Schiff was continuing to mislead his followers by peddling the long-disproved Russian Collusion Hoax, and Twitter did not flag those tweets.” Another late addition to the final draft, and surely added by Trump. In my mind, I visualize Trump making this edit by writing “Shifty Schiff” in Sharpie.
  • “[Twitter’s] officer in charge of so-called ‘Site Integrity’ has flaunted his political bias in his own tweets.” Another addition to the final version. Twitter’s Site Integrity officer is just part of the team. Even if he “owned” the decision, the “conservative” media is deliberately, intentionally, and unfairly attacking a private person for doing their job–and, incredibly, the federal government is amplifying those shameful attacks. And just a reminder that Americans–especially private citizens–have constitutionally protected rights to express their political views. Yet, this public flogging will almost certainly destroy this gentleman’s life. If we view content moderation as a necessity in our society, we need to ensure that people can perform those vitally important jobs without becoming victims of frenzied and baseless cybermobbing. It’s repulsive that the EO uses the President’s megaphone to echo those attacks. It debases the federal government and the White House as an institution.
  • “several online platforms are profiting from and promoting the aggression and disinformation spread by foreign governments like China.” Mostly this EO is about fucking with Twitter, but why not take a swipe or two at China while you’re at it? I’m surprised Rosie O’Donnell didn’t get a swipe as well.

Side note: an earlier draft capitalized Internet. The final draft has lowercase “internet.” More evidence of the document’s depravity.

Section 2

Section 2(a) declares that “It is the policy of the United States to foster clear ground rules promoting free and open debate on the internet.” What does that even mean? “Ground rules” sounds dangerously close to speech and press restrictions a/k/a government censorship.

The section then pivots to Section 230. Before discussing the EO’s horrible statutory analysis, some Section 230 basics:

  • Section 230 has three operative parts. Section 230(c)(1) says websites aren’t liable for third-party content. Section 230(c)(2)(A) says that websites aren’t liable for good faith filtering of objectionable content. Section 230(c)(2)(B) says online services aren’t liable for providing filtering instructions (like blocklists) to users. There are four statutory exceptions for IP, federal criminal prosecutions, ECPA, and FOSTA.
  • Section 230 doesn’t distinguish between “platforms” and “publishers.” Both qualify for Section 230 protection for third-party content. Section 230 does not evaporate when websites make editorial decisions about publishing third-party content. Instead, it protects those decisions from being the basis of liability. Section 230 isn’t limited to “neutral” services. The whole point of Section 230 was to enable websites to actively manage third-party content as much (or as little) as they want to.
  • Section 230 supplements the First Amendment. Repealing Section 230 would not ensure the liability of online services for third-party content; it would just turn more litigation into Constitutional First Amendment litigation. It drives Section 230 scholars nuts to watch people blame Section 230 for outcomes that are really dictated by the First Amendment. That stirs up agitation for Section 230 reform, even though such reform wouldn’t actually change the outcome. For more on why it’s a good thing that Section 230 supplements the First Amendment, see this article.

The EO says that “It is the policy of the United States that the scope of [the Section 230] immunity should be clarified: the immunity should not extend beyond its text and purpose to provide protection for those who purport to provide users a forum for free and open speech, but in reality use their power over a vital means of communication to engage in deceptive or pretextual actions stifling free and open debate by censoring certain viewpoints.”

Ugh.

First, the executive branch has no role in “clarifying” Congress’ statutes unless Congress asks it to do so. Otherwise, Congress’ words are “clarified” when the judicial branch resolves litigated cases. So the EO’s “clarification” has no real legal authority.

Second, the clarification that Section 230’s immunity “should not extend beyond its text” is tautological. Of course the statute can’t extend beyond the statute.

Third, the attempted clarification is that Section 230 doesn’t apply when Internet services “purport to provide users a forum for free and open speech, but in reality use their power over a vital means of communication to engage in deceptive or pretextual actions stifling free and open debate by censoring certain viewpoints.” If we’re getting all textualist-y, show me where Section 230 says that in its text.

Fourth, this purported restriction is a word salad. What makes something a “vital means of communication”? When does a service “purport to provide…a forum for free and open speech”? What constitutes “deceptive” actions to stifle free and open debate? What does it mean to restrict “certain viewpoints”? Given that every content moderation decision makes winners and losers, when does a content removal decision NOT constitute a restriction on “certain viewpoints”? And I already mentioned that you should replace the word “censorship” with “editorial discretion.” So reread this passage–WTF is it saying?

In defense of this “clarification,” the EO lays out its legal analysis:

subparagraph (c)(2) expressly addresses protections from “civil liability” and specifies that an interactive computer service provider may not be made liable “on account of” its decision in “good faith” to restrict access to content that it considers to be “obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable.”  It is the policy of the United States to ensure that, to the maximum extent permissible under the law, this provision is not distorted to provide liability protection for online platforms that — far from acting in “good faith” to remove objectionable content — instead engage in deceptive or pretextual actions (often contrary to their stated terms of service) to stifle viewpoints with which they disagree.  Section 230 was not intended to allow a handful of companies to grow into titans controlling vital avenues for our national discourse under the guise of promoting open forums for debate, and then to provide those behemoths blanket immunity when they use their power to censor content and silence viewpoints that they dislike.  When an interactive computer service provider removes or restricts access to content and its actions do not meet the criteria of subparagraph (c)(2)(A), it is engaged in editorial conduct.  It is the policy of the United States that such a provider should properly lose the limited liability shield of subparagraph (c)(2)(A) and be exposed to liability like any traditional editor and publisher that is not an online provider.

Another UGH. Perhaps the best place to start is the passage’s references to Section 230(c)(2)(A). For reasons I don’t understand, the “conservative” community fetishizes this provision over Section 230(c)(1). This is precisely backwards for anyone familiar with the voluminous Section 230 jurisprudence. (There are 900+ cases citing Section 230 in Lexis’ database.) Section 230(c)(2)(A) is rarely litigated anymore because its “good faith” element makes it time-consuming and costly for defendants to win. (See my 2012 article for more on (c)(2)(A).) Instead, courts have interpreted Section 230(c)(1) to provide the protection that defendants initially sought under Section 230(c)(2)(A); and Section 230(c)(1) is a cheaper and quicker defense to litigate than (c)(2)(A). Accordingly, Section 230(c)(1) cases represent the vast majority of modern Section 230 cases.

So the legal analysis exposes the drafters’ seeming ignorance of the Section 230 jurisprudence. By declaring a “policy” curbing the applicability of Section 230(c)(2)(A) only, the drafters actually don’t address the vast majority of cases where Section 230 is invoked to defend removal/restriction decisions. (We’ll come back to this point with the FCC discussion). Even if the policy were legally effective (which it’s not), it metaphorically is a “swing-and-a-miss” at redressing the targeted problem. Oops.

Even within the limited Section 230(c)(2)(A) jurisprudence, the legal analysis misapprehends the interplay between the “good faith” requirement, which courts have occasionally treated as an objective standard, and the protection for blocking material that is “objectionable,” which courts generally have interpreted using a subjective standard. So long as the Internet services are free to conclude that “certain viewpoints” are subjectively objectionable, then the “good faith” piece doesn’t meaningfully restrict their behavior. If so, the declared “policy” doesn’t fix its purported problems either. (More on that with the FCC discussion too).

While the passage only references Section 230(c)(2)(A), some of its language flatly contradicts Section 230(c)(1) jurisprudence. For example, this sentence–“When an interactive computer service provider removes or restricts access to content and its actions do not meet the criteria of subparagraph (c)(2)(A), it is engaged in editorial conduct”–is wrong. The seminal 1997 Zeran v. AOL case said:

lawsuits seeking to hold a service provider liable for its exercise of a publisher’s traditional editorial functions–such as deciding whether to publish, withdraw, postpone or alter content–are barred (emphasis added)

This Zeran passage has been echoed by dozens of cases since then. Who are future courts going to find more persuasive? The EO’s legal analysis or decades of caselaw?

Finally, the fact that a defendant may be a “titan” or “behemoth” has no actual legal effect on Section 230 jurisprudence. Those references are purely performative for Trump’s voter base.

Section 2(b) then directs the federal government to embrace these policies:

To advance the policy described in subsection (a) of this section, all executive departments and agencies should ensure that their application of section 230(c) properly reflects the narrow purpose of the section and take all appropriate actions in this regard.

What actions are those? If an executive agency brings an enforcement action predicated on a misreading of Section 230, it will lose in court–costing the taxpayers and defendants lots of money. Are there other actions that agencies can take? If so, those too can be challenged in court as violating Congress’ law. Because I don’t see why any agency would behave differently, the EO’s exhortation likely has no practical effect.

Section 2(b) also instructs the Secretary of Commerce to direct the NTIA to petition the FCC to initiate a rule-making to “clarify” the following subjects:

(i) the interaction between subparagraphs (c)(1) and (c)(2) of section 230, in particular to clarify and determine the circumstances under which a provider of an interactive computer service that restricts access to content in a manner not specifically protected by subparagraph (c)(2)(A) may also not be able to claim protection under subparagraph (c)(1), which merely states that a provider shall not be treated as a publisher or speaker for making third-party content available and does not address the provider’s responsibility for its own editorial decisions;

(ii) the conditions under which an action restricting access to or availability of material is not “taken in good faith” within the meaning of subparagraph (c)(2)(A) of section 230, particularly whether actions can be “taken in good faith” if they are:

(A) deceptive, pretextual, or inconsistent with a provider’s terms of service; or

(B) taken after failing to provide adequate notice, reasoned explanation, or a meaningful opportunity to be heard; and

(iii) any other proposed regulations that the NTIA concludes may be appropriate to advance the policy described in subsection (a) of this section.

First, the FCC doesn’t have to honor the rule-making request. It’s their call.

Second, the FCC has no jurisdiction here. Congress hasn’t delegated it rule-making authority over Section 230. The FCC could try to manufacture some other basis for jurisdiction, but that would be challenged in court. The 1997 Reno v. ACLU court said that the Internet, unlike broadcasting, is entitled to the highest level of First Amendment protection. Thus, the FCC cannot use any broadcaster-based justifications for regulation to extend its reach to the Internet. If the FCC has no jurisdiction, then what legal consequences follow from the outputs of any rule-making procedure?

Third, “clarifying” statutes is what courts do. In the old days, when the executive branch wanted to change the interpretation of a statute, it would pick a favorable test case and run it through the court system. Here, the Trump administration is trying to skip that step by running to the FCC instead. It won’t work.

The first point of the desired rule-making targets the Section 230(c)(1)/Section 230(c)(2)(A) divide I mentioned above. Unless there’s already been a brokered deal with FCC commissioners, it’s unclear why the EO drafters think they can sell that interpretation at the FCC. There is a lot of caselaw to the contrary, so either the FCC has to ignore that precedent or it can’t reach the desired conclusion.

Interestingly, the second point seeks to reinterpret Section 230(c)(2)(A)’s “good faith” requirement to include some elements of procedural due process. This proposition is also inconsistent with the precedent. See, e.g., Holomaxx Technologies v. Microsoft Corp., 783 F. Supp. 2d 1097 (N.D. Cal. 2011) (“Nor does Holomaxx cite any legal authority for its claim that Microsoft has a duty to discuss in detail its reasons for blocking Holomaxx’s communications or to provide a remedy for such blocking. Indeed, imposing such a duty would be inconsistent with the intent of Congress to ‘remove disincentives for the development and utilization of blocking and filtering technologies.’”)

Section 3

Section 3 requires executive branch agencies to gather information about their online ad spend and submit a report to the OMB Director in 30 days. Presumably the OMB director will issue further instructions after receiving and analyzing the reports, but that’s not required by the EO.

Section 3 also says:

The Department of Justice shall review the viewpoint-based speech restrictions imposed by each online platform identified in the report described in subsection (b) of this section and assess whether any online platforms are problematic vehicles for government speech due to viewpoint discrimination, deception to consumers, or other bad practices.

I have no idea what this means. This provision was added to the final draft. That makes me think AG Barr has already agreed to do something specific. I assume the DOJ will tread cautiously here, as any discriminatory allocation of ad dollars could spur litigation.

Of course, the real test of the sincerity of this provision is whether the Trump campaign will allocate its ad dollars consistent with the results of the DOJ’s review. Given the dollars they are spending on Google and Facebook, that would be quite a change in the campaign strategy. But how could the Trump campaign, in good conscience, financially support any site that its own DOJ determines is committing “viewpoint discrimination, deception to consumers, or other bad practices”?

Furthermore, if a site isn’t worthy of “government speech,” then the DOJ’s findings should apply to both paid and organic government speech. That means the findings would extend to government-operated social media accounts. Thus, if the DOJ decides in its review that Twitter isn’t worthy of “government speech,” the DOJ would then instruct Trump to shut down his Twitter account (the Second Circuit has confirmed that @realdonaldtrump is government speech). It’s clear this provision was drafted in haste, but I prefer to think that the drafters are playing some sophisticated 4D chess and not stupidly laying the foundation for self-sabotage.

Section 4

Section 4(a) declares: “It is the policy of the United States that large online platforms, such as Twitter and Facebook, as the critical means of promoting the free flow of speech and ideas today, should not restrict protected speech.” This sentence changed in the final draft. In the prior draft, the sentence referenced the public forum doctrine. That was baseless because the Ninth Circuit’s ruling in PragerU v. Google said that social media services are not state actors (and it expressly rejected analogies to the Packingham and Pruneyard cases, which the EO makes anyways). As reworded, the sentence has become nonsensical. On what basis can the government tell private publishers what content they must or can’t publish? That would be a straight-out First Amendment violation. So no one could act on this policy without violating the First Amendment. The paragraph tries to weaponize the Knight v. Trump ruling by saying “Communication through these channels has become important for meaningful participation in American democracy, including to petition elected leaders,” but the Knight v. Trump ruling only applied to how the government operates its social media accounts. So this policy is nonsense.

Section 4(b) directs the White House to send the 16,000+ reports received from last year’s “Tech Bias Reporting” tool to the DOJ and FTC. I’m sure they are eagerly awaiting the delivery. Note that those reports were unverified and likely flooded by Breitbart readers and possibly Russian trollbots, so odds are good that the dataset is garbage.

Section 4(c) asks the FTC to “consider taking action,” using its existing authority under Section 5 of the FTC Act over unfair and deceptive trade practices, against “entities covered by section 230 that restrict speech in ways that do not align with those entities’ public representations about those practices.” The FTC already has this enforcement authority irrespective of the EO, and it already is well aware of how many people on the left and the right would love for it to pursue the Internet giants.

Section 4(d) asks for two more favors from the FTC. First, it asks:

For large online platforms that are vast arenas for public debate, including the social media platform Twitter, the FTC shall also, consistent with its legal authority, consider whether complaints allege violations of law that implicate [the policy that large online platforms, such as Twitter and Facebook, as the critical means of promoting the free flow of speech and ideas today, should not restrict protected speech]

Calling out Twitter as a target looks pretty much like naked government censorship. Otherwise, the target set (“large online platforms that are vast arenas for public debate”) isn’t defined or clear.

The other Section 4(d) favor is to ask the FTC to consider preparing a report on the 16,000+ claimed bias reports the White House is sending over. I don’t expect the FTC to put that request very high on its priority list.

Section 5 instructs the AG to assemble a working group to evaluate “the potential enforcement of State statutes that prohibit online platforms from engaging in unfair or deceptive acts or practices.” This is a bizarre request because the DOJ ordinarily can’t enforce state statutes, so they don’t have much expertise on this topic. The provision says the working group should discuss and consult with the state AGs, but I don’t see how the working group can produce anything credible without their full participation. The working group will get the 16,000+ claimed bias reports to make sure more taxpayer-funded eyeballs waste time looking at them.

Section 5 also instructs the working group to “develop model legislation for consideration by legislatures in States where existing statutes do not protect Americans from such unfair and deceptive acts and practices.” There are already model state statutes governing unfair and deceptive trade practices, which states have not uniformly adopted. I’m sure state legislatures will welcome a new contribution to that canon.

Section 5 further tells the working group to gather public data on:

(i) increased scrutiny of users based on the other users they choose to follow, or their interactions with other users;

(ii) algorithms to suppress content or users based on indications of political alignment or viewpoint;

(iii) differential policies allowing for otherwise impermissible behavior, when committed by accounts associated with the Chinese Communist Party or other anti-democratic associations or governments;

(iv) reliance on third-party entities, including contractors, media organizations, and individuals, with indicia of bias to review content; and

(v) acts that limit the ability of users with particular viewpoints to earn money on the platform compared with other users similarly situated.

The randomness and conspiracy-fueled underpinnings of this list make me wonder if someone at Breitbart drafted it. Like, reread (iii) and try not to giggle.

Section 6

Section 6 says: “The Attorney General shall develop a proposal for Federal legislation that would be useful to promote the policy objectives of this order.” Based on the DOJ Section 230 roundtable from February, I assumed they were already doing this. It’s a little hard to imagine that the DOJ has anti-Section 230 things on its wishlist that aren’t already part of the EARN IT Act, but they are a clever bunch. Any proposed legislation that may emerge from the DOJ will get serious attention.

Section 7

Section 7 defines “online platform” as “any website or application that allows users to create and share content or engage in social networking, or any general search engine.” I’ve repeatedly lamented regulatory efforts to distinguish “social media” from the rest of the Internet, and this definition highlights the problem. It capaciously includes every UGC site, big or small, and it covers many services that wouldn’t normally be considered UGC but nevertheless provide some mechanism for users to “create and share content.” For example, this picks up every online retailer that has a UGC/review component; it could even cover a fintech company like Venmo that allows for descriptions of transactions. Can you think of any Internet services you use regularly that wouldn’t meet this definition of “online platform”?

Namecheck Note

The following companies are referenced by name:

  • Twitter: 6x
  • Facebook: 2x
  • YouTube and Instagram: 1x each

Concluding Observations

The DOJ’s Role. The EO makes a lot of asks of the DOJ. I call them asks because I believe the DOJ can decline White House requests. Does anyone believe that will actually happen? AG Barr has repeatedly proven his fealty as Trump’s personal lawyer. As a result, I’m sure Barr will do whatever his overlord asks of him. As evidence of that, several DOJ-specific provisions were added in the final draft (Section 6 is one example). That would make sense if the White House and the DOJ pre-brokered the deal.

Did Twitter Make a Mistake Fact-Checking Trump? Twitter, and all other social media companies, are stuck in a no-win situation. If the services rebuke politicians, the politicians will use their positions of power to punish the services–exactly what Trump did here. However, if the services don’t rebuke politicians (Facebook’s approach), then politicians will lie nonstop to their constituents without consequence. It turns out that social media is a wonderful tool for propaganda by government actors because of the direct unfiltered line to constituents. So how should social media services navigate this situation with downsides in every direction? As WOPR concluded, the “only winning move is not to play.” If you can’t police them and you can’t stop them from lying, the least worst option for all social media services would be to dump all accounts by politicians or political candidates and exit that industry segment.

Section 230 Is Doomed. Though the EO doesn’t doom Section 230, we should not discount Section 230’s extremely perilous state. Everyone is shitting on Section 230:

  • Both President Trump and challenger Biden have called to repeal it. (Biden’s exact words: “Section 230 should be revoked, immediately.” Trump’s exact words: “REVOKE 230!”). This means we’re guaranteed the President of the US in January 2021 will want to revoke Section 230, regardless of the election results.
  • The DOJ held a day-long event focused on strategizing ways to gut it.
  • Dozens of members of Congress, from both parties, have publicly spoken out against Section 230.

It’s devastating how many of these Section 230 disparagements are flat-out wrong–predicated on false statements like the publisher/platform distinction; or ignoring the First Amendment backstop that will apply even in Section 230’s absence; or taking for granted all of the wonderful 230-protected services that enabled our society to keep functioning during the COVID-19 shutdowns. Yet, the cumulative effect of these criticisms puts Section 230 in extreme jeopardy. How can Section 230 survive the attacks on all sides from some of the most powerful people and institutions in the world?

Note Any Enablers. The EO is a dumpster fire, yet some folks will publicly applaud it anyways. Keep track of those folks and carefully scrutinize their past and future credibility.

Additional Reading

If a 5,800-word blog post wasn’t enough reading for you, here’s more: