Section 230 is subconstitutional free speech law. One might naively expect it to steer clear of the notorious complexity of First Amendment law, and for the most part it does. Both arms of §230 establish broad and simple rules. There is no mucking about with actual malice, public versus private figures, traditional versus limited public forums, tiers of scrutiny, or any of the other Ptolemaic doctrinal baggage of the First Amendment. Section 230(c)(1) avoids waking the slumbering giant by granting immunity rather than imposing liability for speech, §230(c)(2) by giving private actors rather than state actors a privilege to block speech on their platforms.

Even so, debates about §230's reach have an oddly familiar ring to them. The thrust and parry of arguments about when online speech should stay up or come down recapitulate well-worn arguments about when offline speech should or shouldn't be allowed. There are, I think, three things going on. One is that §230 itself is always open to challenge. It may be good law, but that doesn't tell us whether it's a good law. The second is that even though §230's protection is absolute and its coverage broad, that coverage still has limits (as any law's must). Some of those limits look a lot like the limits on the scope of “speech” under the First Amendment. And the third is that §230 by design gives platforms substantial freedom to allow speech or to restrict it. In choosing how to exercise that freedom, they have to confront the same conflicts that animate First Amendment doctrine. All three of these open the door to the kinds of arguments that one regularly sees in First Amendment cases and free speech debates.

Speech vs. conduct. The line between “speech” and “conduct” in First Amendment doctrine is contested, and so is the corresponding line in §230 between “information” or “material” of which one can be the “publisher or speaker” and everything else. Some plaintiffs try to plead out of §230 by arguing that failing to supervise sex traffickers, or providing service to terrorists, is conduct rather than speech. And some sharing-economy platforms like AirBnB try to plead into §230 by arguing that they provide a forum for users to speak (albeit in ways that often lead to transactions).

Hate speech and harassment. When do hate speech against groups and harassing speech against individuals go too far? Different countries answer the question in different ways — and so do different platforms. Those arguing for tighter crackdowns make familiar claims about threats, coordinated attacks, psychological abuse, and expressive harms. Those arguing against make equally familiar claims about political speech, counter-speech, chilling effects, and excessive sensitivity.

Intellectual property. Section 230, for better or worse, carves out from its preemption “any law pertaining to intellectual property.” But for better or worse, the First Amendment also gives special deference to IP laws. The result is that invoking IP—particularly copyright—is a common plaintiffs' tactic for avoiding §230. Some of this is boundary work: the IP fields have their own frameworks for dealing with secondary liability (e.g., §512). But there is also an interesting subconstitutional leveling taking place within IP: recent expansions in fair use are equally available to online and offline defendants.

Rules vs. standards. Very few platforms protected by §230 allow all of the speech they legally could. But policies distinguishing between permissible and impermissible speech (e.g., spam vs. ham) and policies backed up with sanctions (e.g., account deletion) raise familiar jurisprudential problems. In First Amendment terms, platforms and their critics worry about overbreadth, underinclusion, vagueness, and discriminatory enforcement. Case in point: Twitter's endless struggle to develop a workable harassment and hate speech policy and make it stick.

Contemporary community standards. The Internet's breakdown of geographic barriers challenges the First Amendment's reliance on local community norms to define obscenity. Section 230(e)(1) specifically defers to federal obscenity laws, so online platforms have to live with that uncertainty. But even if they didn't, the same problem recurs one level down: how much should a platform allow for diverse and conflicting local norms about acceptable freedom of expression? Consider Reddit's repeated near-meltdowns over the antics of “problematic” subreddits like r/creepshots and r/The_Donald. Any sufficiently large and diverse platform must confront Gödel's Theorem of Liberalism: no social system can be both consistent and completely tolerant.

State action. One of the most important moving parts in the standard defense of strong First Amendment protections for noxious speech is that individuals can avoid most of it in practice because private actors are free to speak, listen, and convey speech as they choose. The state-action, public-forum, and government-speech doctrines may be confused and confusing, but they draw a crucial legal and normative line. Even though Internet platforms are, at least for now, clearly private actors for First Amendment purposes, they often regard themselves as having a responsibility to behave responsibly, which they define in ways that rely on traditionally public rule-of-law virtues like availability to all, neutrality, fair notice, and consistency.

Platform speech. Platforms are always ambivalent about the speech they carry: they want to be praised (and sometimes paid) for it, but they also don't want to be blamed for it. In the First Amendment context, every medium presents the issue of when a platform for others' speech itself “speaks,” with all the attendant rights and responsibilities. Section 230(c)(1) allows platforms to be extraordinarily hands-off; §230(c)(2) lets them be extraordinarily hands-on; the combination of the two lets them be anywhere in between. Plaintiffs sometimes try to argue that one choice or another gives a platform an obligation to allow their speech or to remove someone else's. These arguments usually fail — but there is a line here, and there has to be, because §230 by its very nature distinguishes between first-party and third-party speech. Perhaps the Roommates.com “contributes materially to the alleged illegality” test is messy for the same reasons that the First Amendment government-speech cases are messy.

Jurisdiction. Free speech issues are global, and different countries have different free speech norms. Anyone who speaks in a way accessible to people in more than one country has to contend with the differences. This is a context in which §230 may not make much of a difference. Any platform with an international reach is going to have to contend with other countries' more restrictive laws anyway, and those countries may not much care whether American free speech law acts at the constitutional or statutory level. The most important piece of the puzzle here may actually be the SPEECH Act, which explicitly incorporates §230 in making it hard to enforce foreign defamation judgments in the United States — helping give local American platforms the ability simply to ignore what other countries have to say.

* * *

Section 230, everyone agrees, singles out online speech for special solicitude. One dimension of this solicitude is familiar. By protecting online speech more robustly than offline speech, §230 is an example of what Eric Goldman calls “Internet exceptionalism.” Zeran confirmed that online speech intermediaries would be shielded from liability in cases where their offline counterparts would not be, and much of the debate around §230 is over the wisdom of this choice. (Personally, I agree with Felix Wu: the risks of collateral censorship on Internet-scale platforms are serious enough that this special immunity is usually justified.)

But at the risk of stating the obvious, the other half of the term also matters. Section 230 protects “online” speech, yes, but it also protects online “speech.” It is the 21st-century First Amendment. Like any true heir, it has received a great deal from its predecessor: not just the family fortune, but the family feuds as well.

James Grimmelmann is a professor of law at Cornell Tech and Cornell Law School. He studies how laws regulating software affect freedom, wealth, and power. He helps lawyers and technologists understand each other, applying ideas from computer science to problems in law and vice versa.

This essay is part of a larger collection about the impact of Zeran v. AOL curated by Eric Goldman and Jeff Kosseff.
