Supreme Court Leaves 230 Alone For Now, But Justice Thomas Gives A Pretty Good Explanation For Why It Exists In The First Place

Breathe.

Supreme Court Holds Investiture Ceremony For Associate Justice Ketanji Brown Jackson

(Photo by Collection of the Supreme Court of the United States via Getty Images)

Our long national wait for how the Supreme Court would rule regarding Section 230 is over, and the answer is… we need to keep waiting. The headlines note, correctly, that the court punted the matter. But there are other elements of the actual rulings that are kind of interesting and could bode well for the future of the internet and Section 230.

As you’ll likely recall, back in October, the Supreme Court surprised a lot of people by taking two sorta related cases regarding the liability of social media sites, Gonzalez v. Google, and Twitter v. Taamneh. Even though both cases were ruled on by the 9th Circuit in the same opinion, and had nearly identical fact patterns (terrorists carried out an attack overseas, and the family of a victim sued social media companies to try to hold them liable for the attacks because they allowed terrorist organizations to have accounts), only one (Gonzalez) technically dealt with Section 230. For unclear reasons, even though there was some discussion of 230 in the Taamneh case, the ruling was more specifically about whether or not Twitter was liable for violating JASTA (the Justice Against Sponsors of Terrorism Act).

Both cases sought cert from the Supreme Court, but again in an odd way. The family in Gonzalez challenged the 9th Circuit’s ruling that their case was precluded by Section 230, but kept changing the actual question they were asking the Supreme Court to weigh in on, bouncing around from whether recommendations took you out of 230, to whether algorithms took you out of 230, to (finally) whether the creation of thumbnail images (?!?!?!?) took you out of 230. For Taamneh, Twitter sought conditional cert, basically saying that if the court was going to take Gonzalez, it should also take Taamneh. And that’s what the court did. Though I’m still a bit confused as to why it held separate oral arguments for the two cases (on consecutive days) rather than combining them entirely.

And the end result suggests that the Supreme Court is equally confused why it didn’t combine the cases. And also, why it took these cases in the first place.

Indeed, the fact that these rulings came out in May is almost noteworthy on its own. Most people expected that, like most “big” or “challenging” cases, these would wait until the very end of the term in June.

Either way, the final result is a detailed ruling in Taamneh by Justice Clarence Thomas, which came out 9 to 0, and a per curiam (whole court, no one named) three-pager in Gonzalez that basically says “based on our ruling in Taamneh, there’s no underlying cause of action in Gonzalez, and therefore, we don’t have to even touch the Section 230 issue.”


The general tenor of the response from lots of people is… “phew, Section 230 is saved, at least for now.” And that’s not wrong. But I do think there’s more to this than just that. While the rulings don’t directly address Section 230, I’m somewhat amazed at how much of Thomas’s ruling in Taamneh, talking about common law aiding and abetting, basically lays out all of the reasons why Section 230 exists: to avoid applying secondary liability to third parties who aren’t actively engaged in knowingly trying to help someone violate the law.

Much of the ruling goes through the nature of common law aiding and abetting, and what factors and conditions are necessary to find a third party liable, and basically says the standards are high. It can’t be mere negligence or recklessness. And Justice Thomas recognizes that if you make secondary liability too broad, it will sweep in all sorts of innocent bystanders.

Importantly, the concept of “helping” in the commission of a crime—or a tort—has never been boundless. That is because, if it were, aiding-and-abetting liability could sweep in innocent bystanders as well as those who gave only tangential assistance. For example, assume that any assistance of any kind were sufficient to create liability. If that were the case, then anyone who passively watched a robbery could be said to commit aiding and abetting by failing to call the police. Yet, our legal system generally does not impose liability for mere omissions, inactions, or nonfeasance; although inaction can be culpable in the face of some independent duty to act, the law does not impose a generalized duty to rescue.

The crux then:

For these reasons, courts have long recognized the need to cabin aiding-and-abetting liability to cases of truly culpable conduct. They have cautioned, for example, that not “all those present at the commission of a trespass are liable as principals” merely because they “make no opposition or manifest no disapprobation of the wrongful” acts of another.


Those statements are actually the core of why 230 exists in the first place: so that we put the liability on the party who actively and knowingly participated in the violative activity. Thomas spends multiple pages explaining why this general principle makes a lot of sense, which is nice to hear. Again, Thomas concludes this section by reinforcing this important point:

The phrase “aids and abets” in §2333(d)(2), as elsewhere, refers to a conscious, voluntary, and culpable participation in another’s wrongdoing.

If that language sounds vaguely familiar, that’s because it’s kind of like the language the 9th Circuit used last fall in saying that Reddit didn’t violate FOSTA, because it wasn’t taking deliberate actions to aid trafficking.

Having established that basic, sensible framework, Thomas moves on to apply it to the specifics of Taamneh, and finds it clear that there’s no way the plaintiffs have shown that social media did anything that gets anywhere within the same zip code as what’s required for aiding and abetting. Because all they did was create a platform that anyone could use.

None of those allegations suggest that defendants culpably “associate[d themselves] with” the Reina attack, “participate[d] in it as something that [they] wishe[d] to bring about,” or sought “by [their] action to make it succeed.” Nye & Nissen, 336 U. S., at 619 (internal quotation marks omitted). In part, that is because the only affirmative “conduct” defendants allegedly undertook was creating their platforms and setting up their algorithms to display content relevant to user inputs and user history. Plaintiffs never allege that, after defendants established their platforms, they gave ISIS any special treatment or words of encouragement. Nor is there reason to think that defendants selected or took any action at all with respect to ISIS’ content (except, perhaps, blocking some of it). Indeed, there is not even reason to think that defendants carefully screened any content before allowing users to upload it onto their platforms. If anything, the opposite is true: By plaintiffs’ own allegations, these platforms appear to transmit most content without inspecting it.

From there, he notes that just because a platform can be used for bad things, it doesn’t make sense to hold the tool liable, again effectively making the argument for why 230 exists:

The mere creation of those platforms, however, is not culpable. To be sure, it might be that bad actors like ISIS are able to use platforms like defendants’ for illegal—and sometimes terrible—ends. But the same could be said of cell phones, email, or the internet generally. Yet, we generally do not think that internet or cell service providers incur culpability merely for providing their services to the public writ large. Nor do we think that such providers would normally be described as aiding and abetting, for example, illegal drug deals brokered over cell phones—even if the provider’s conference-call or video-call features made the sale easier.

I’ve seen some people raise concerns that the language in the above paragraph opens up an avenue for SCOTUS to pull a “social media is a common carrier, and therefore we can force them to host all speech” but I’m not sure I actually see that in the language at all. Generally speaking, email and “the internet generally” are not seen as common carriers, so I don’t see this statement as being a “social media is a common carrier” argument. Rather it’s a recognition that this principle is clear, obvious, and uncontroversial: you don’t hold a platform liable for the speech of its users.

From there, Thomas also completely shuts down the argument that “algorithmic recommendations” magically change the nature of liability:

To be sure, plaintiffs assert that defendants’ “recommendation” algorithms go beyond passive aid and constitute active, substantial assistance. We disagree. By plaintiffs’ own telling, their claim is based on defendants’ “provision of the infrastructure which provides material support to ISIS.” App. 53. Viewed properly, defendants’ “recommendation” algorithms are merely part of that infrastructure. All the content on their platforms is filtered through these algorithms, which allegedly sort the content by information and inputs provided by users and found in the content itself. As presented here, the algorithms appear agnostic as to the nature of the content, matching any content (including ISIS’ content) with any user who is more likely to view that content. The fact that these algorithms matched some ISIS content with some users thus does not convert defendants’ passive assistance into active abetting. Once the platform and sorting-tool algorithms were up and running, defendants at most allegedly stood back and watched; they are not alleged to have taken any further action with respect to ISIS.

Again, I’ve seen some concerns that this language opens up some potential messiness about AI and “neutrality,” but I’m actually pretty pleased with the language used here, which avoids saying “neutral” (a completely meaningless word in the context of algorithms whose entire purpose is to recommend stuff) and talks about providing general tools that just try to provide any user with results that match their interests.

Basically, my read on this is that the court is effectively saying that if you create algorithms that are just designed to take inputs and provide outputs based on those inputs, you’re in the clear. The only hypothetical where you might face some liability is if you designed an algorithm to deliberately produce violative content, like an AI tool whose sole job is to defame people (defAIMe?) or to take any input and purposefully try to convince you to engage in criminal acts. Those seem unlikely to actually exist in the first place, so the language above actually seems, again, to be pretty useful.

The ruling again doubles down on the fact that there was nothing specific to the social media sites that was deliberately designed to aid terrorists, and that makes the plaintiffs’ argument nonsense:

First, the relationship between defendants and the Reina attack is highly attenuated. As noted above, defendants’ platforms are global in scale and allow hundreds of millions (or billions) of people to upload vast quantities of information on a daily basis. Yet, there are no allegations that defendants treated ISIS any differently from anyone else. Rather, defendants’ relationship with ISIS and its supporters appears to have been the same as their relationship with their billion-plus other users: arm’s length, passive, and largely indifferent. Cf. Halberstam, 705 F. 2d, at 488. And their relationship with the Reina attack is even further removed, given the lack of allegations connecting the Reina attack with ISIS’ use of these platforms.

Second, because of the distance between defendants’ acts (or failures to act) and the Reina attack, plaintiffs would need some other very good reason to think that defendants were consciously trying to help or otherwise “participate in” the Reina attack. Nye & Nissen, 336 U. S., at 619 (internal quotation marks omitted). But they have offered no such reason, let alone a good one. Again, plaintiffs point to no act of encouraging, soliciting, or advising the commission of the Reina attack that would normally support an aiding-and-abetting claim. See 2 LaFave §13.2(a), at 457. Rather, they essentially portray defendants as bystanders, watching passively as ISIS carried out its nefarious schemes. Such allegations do not state a claim for culpable assistance or participation in the Reina attack.

Also important, the court makes it clear that a “failure to act” can’t actually trigger liability here:

Because plaintiffs’ complaint rests so heavily on defendants’ failure to act, their claims might have more purchase if they could identify some independent duty in tort that would have required defendants to remove ISIS’ content. See Woodward, 522 F. 2d, at 97, 100. But plaintiffs identify no duty that would require defendants or other communication-providing services to terminate customers after discovering that the customers were using the service for illicit ends. See Doe, 347 F. 3d, at 659; People v. Brophy, 49 Cal. App. 2d 15, 33–34 (1942).14 To be sure, there may be situations where some such duty exists, and we need not resolve the issue today. Even if there were such a duty here, it would not transform defendants’ distant inaction into knowing and substantial assistance that could establish aiding and abetting the Reina attack.

Is there the possibility of some nonsense sneaking into the second half of that paragraph? Eh… I could see some plaintiffs’ lawyers trying to make cases out of it, but I think the courts would still reject most of them.

Similarly, there is some language around hypothetical ways in which secondary liability could apply, but the Court is pretty clear that there has to be something beyond just providing ordinary services to reach the necessary bar:

To be sure, we cannot rule out the possibility that some set of allegations involving aid to a known terrorist group would justify holding a secondary defendant liable for all of the group’s actions or perhaps some definable subset of terrorist acts. There may be, for example, situations where the provider of routine services does so in an unusual way or provides such dangerous wares that selling those goods to a terrorist group could constitute aiding and abetting a foreseeable terror attack. Cf. Direct Sales Co. v. United States, 319 U. S. 703, 707, 711–712, 714–715 (1943) (registered morphine distributor could be liable as a coconspirator of an illicit operation to which it mailed morphine far in excess of normal amounts). Or, if a platform consciously and selectively chose to promote content provided by a particular terrorist group, perhaps it could be said to have culpably assisted the terrorist group….

In those cases, the defendants would arguably have offered aid that is more direct, active, and substantial than what we review here; in such cases, plaintiffs might be able to establish liability with a lesser showing of scienter. But we need not consider every iteration on this theme. In this case, it is enough that there is no allegation that the platforms here do more than transmit information by billions of people, most of whom use the platforms for interactions that once took place via mail, on the phone, or in public areas.

And from there, the Court makes a key point: just because some bad people use a platform for bad purposes, it doesn’t make the platform liable, and (even better) Justice Thomas highlights that any other holding would be a disaster (basically making the argument for Section 230 without talking about 230).

The fact that some bad actors took advantage of these platforms is insufficient to state a claim that defendants knowingly gave substantial assistance and thereby aided and abetted those wrongdoers’ acts. And that is particularly true because a contrary holding would effectively hold any sort of communication provider liable for any sort of wrongdoing merely for knowing that the wrongdoers were using its services and failing to stop them. That conclusion would run roughshod over the typical limits on tort liability and take aiding and abetting far beyond its essential culpability moorings.

Thus, based on all this, the court says the 9th Circuit ruling that allowed the Taamneh case to move forward was clearly mistaken, reverses it, and sends the case back down. Specifically, it dings the 9th Circuit for having “misapplied the ‘knowing’ half of ‘knowing and substantial assistance.’”

At the very very end, the ruling does mention questions regarding Google and payments to users, and whether or not that might reach aiding and abetting. But, importantly, that issue isn’t really before the court, because the plaintiffs effectively dropped it. It’s possible that the issue could live on, but again, I don’t see how it becomes problematic.

Overall, this was kind of a weird case and a weird ruling. SCOTUS seems to have recognized they never should have taken the case in the first place, and this ruling effectively allowed them to back out of making a ruling on 230 that they would regret. However, instead, Justice Thomas, of all people, more or less laid out all of the reasons why 230 exists and why we want that in place, to make sure that liability applies to the party actually making something violative, rather than the incidental tools used in the process.

Separately, it does seem at least marginally noteworthy that, while not directly addressing Section 230 (and explicitly saying the court wouldn’t rule on the issue today), Thomas didn’t file a concurrence with the Gonzalez ruling begging for more 230 cases. As you may know, Thomas has seemed to skip no opportunity to file random concurrences in cases where 230 wasn’t directly at issue, musing broadly on the law and his views of it. And here, he didn’t. Rather, he wrote a ruling that sounds kinda like it could be a defense of Section 230. Maybe he’s learning?

In the end, this result is probably about as good as we could have hoped for. It leaves 230 in place and, as far as I can tell, doesn’t add any really dangerous dicta that could lead to abuse.

It also serves to reinforce a key point: contrary to the belief of many, 230 is not the singular law that protects internet websites from liability. Lots of other things do as well. 230 really only serves as an express lane to get to the same exact result. That’s important, because it saves money, time, and resources from being wasted on cases that are going to fail in the end anyway. But it means that changing or removing 230 won’t magically make companies liable for things their users do. It won’t.

Finally, speaking about money, time, and resources, a shit ton of all three were spent on briefs from amici for the Gonzalez case, in which dozens were filed (including one from us). And… the end result was a three page per curiam basically saying “we’re not going to deal with this one.” The end result is good, and maybe it wouldn’t have been without all those briefs. However, that was an incredible amount of effort that had to be spent for the Supreme Court to basically say “eh, we’ll deal with this some other time.”

The Supreme Court might not care about all that effort expended for effectively nothing, but it does seem like a wasteful experience for nearly everyone involved.


