The modern legal dialectic around the First Amendment is harsh and dauntingly complicated. The prevailing U.S. Supreme Court jurisprudence on the topic values free speech because it can contribute to human meaning-making and construction of selfhood, and has the potential to produce the sorts of ideas and information that can lead to human enlightenment. The Court also deeply distrusts governmental regulation of speech, and has articulated powerful doubts about the government's ability to competently balance social costs and benefits pertaining to speech, especially when driven by censorial motives. Actual living human beings and their emotions do not much factor into either the Court's positive or negative justifications for free speech. See generally Toni Massaro, Helen Norton and Margot Kaminski, “SIRI-OUSLY 2.0: What Artificial Intelligence Reveals about the First Amendment,” 101 Minnesota Law Review 2481 (2017).

Section 230 takes this free speech-rooted disregard for people and their feelings and ramps it up a few notches, immunizing online media companies not only from liability for hosting anything the First Amendment protects, but also from the reach of most of the very limited speech restrictions that First Amendment jurisprudence disdainfully tolerates.

Internet Service Providers (ISPs) can host maliciously defamatory speech that would not be protected by the First Amendment. They can host threats of violence that are outside the First Amendment. They can host obscenity as long as it does not constitute child pornography, and they can host panic-inducing online equivalents of shouts of “Fire!” in crowded theaters without fearing civil suit or arrest, as long as no federal crime is committed.

As it happens, defamatory speech, threats and obscenity almost never rise to the level of federal crimes. According to one legal scholar, “it is now generally accepted that the First Amendment forbids criminal penalties for defamation.” Actionable threats must be “true threats” and require a higher level of culpability than negligence; it is not clear that even a showing of recklessness would be adequate. And since 1988, the federal government has only rarely pursued obscenity charges for content that did not involve or depict children. Even when completely outside the protections of the First Amendment, almost any speech can be hosted on a wholly for-profit basis, featuring paid advertisements or charging subscription fees, without fear of legal responsibility.

Section 230 asks nothing in return for this extensive ISP immunity. The ISPs can't be forced to remove offending content unless it fits within what are mostly very narrow exceptions, as demonstrated by twenty years of litigation. The only broadly interpreted immunity exception is for intellectual property, which §230 actually cares about because it is rooted in money and commerce and intangible “property” rather than people and their messy and seemingly inconsequential emotions.

ISPs don't have to keep track of who posts what, or identify any person doing the offensive posting, unless they want to or choose to comply with an appropriately drafted and served subpoena; as a result, legal representation is generally necessary to successfully identify the source of harmful speech.

Section 230 has therefore made hosting defamation, threats and exhortations that lead to panic or violence into a lucrative online business model. Twenty years ago, AOL strategically ignored Ken Zeran's horrific victimization by an anonymous internet hoaxer.

Today, acts of online harassment directed at contemporary Ken Zerans are more likely to fill the enormous coffers of companies like Google, Facebook, Twitter, GoDaddy and Reddit. The platforms may change over time but the basic framework remains the same. Eyeball attraction generates demand for online services such as web hosting, cloud computing, advertising, data analytics, storage, and domain name registration.

Hatred can be very profitable. Research conducted by ProPublica “surveyed the most visited websites of groups designated as extremist by either the SPLC or the Anti-Defamation League … [and] found that more than half of them—39 out of 69—made money from ads, donations or other revenue streams facilitated by technology companies. At least 10 tech companies played a role directly or indirectly in supporting these sites.” ProPublica further found that “PayPal, the payment processor, has a policy against working with sites that use its service for ‘the promotion of hate, violence, [or] racial intolerance.’ Yet it was by far the top tech provider to the hate sites with donation links on 23 sites, or about one-third of those surveyed by ProPublica.”

A recent Pew Research Center survey found that 41% of adult Americans “have been personally subjected to harassing behavior online, and an even larger share (66%) has witnessed these behaviors directed at others. … [N]early one-in-five Americans (18%) have been subjected to particularly severe forms of harassment online, such as physical threats, harassment over a sustained period, sexual harassment or stalking.” A full 58% of those who have been harassed online said it happened via social media, while for 23% their “most recent” harassment experience occurred in the comments sections of a website; for 15% the harassment occurred via a text or messaging app. Occasionally, ISPs will help out individual harassment victims. But they are not required to do so, and usually they will not.

A few large social media platforms are voluntarily addressing some online harassment campaigns to appease advertisers and large, well organized interest groups, with interventions that focus on hate speech targeted at groups that share common characteristics such as race, gender, sexual orientation, political beliefs or religion. Some affected individuals see such interventions as inadequate, while other people see them as censorious threats to freedom of expression online. The companies that own these platforms are much more likely to base their strategies for addressing online harassment on what is most profitable than to carefully balance privacy, safety, and speech interests. Section 230 endorses an approach to speech that is entirely driven by money. The online media companies that rein in threats and hate speech on their platforms in turn create profitable opportunities for the emergence of new social media platforms on which anything goes.

Even with a strong commitment to expansive free speech principles, a sense of decency and fair play should make one question the legitimacy of §230 as currently written and interpreted. Manufacturers, food producers, and companies in the service industries have to take responsibility for their goods and services, no matter how large the company or how prodigious its output. But gigantic, fabulously wealthy companies like Facebook, Google and Twitter do not have to take any responsibility for the harms caused by the online platforms they own, control and profit from. Section 230 means that companies are allowed to facilitate or ignore speech-ignited harms they absolutely have the right and ability to control, as long as someone else is the speaker.

Some people tout §230 as the law that created the Internet. But given the willingness of social media, e-commerce, Internet search and web hosting businesses to do business in nations that lack laws anything like §230, histrionic claims that without §230 successful and creative online companies like Google, Facebook and Twitter would not exist or could not thrive are unsupportable.

As I have argued before, ISPs would still be profitable even if they were required to affirmatively mitigate the most severe of the harms that result from some portion of the online speech they host. China has one of the most highly censored Internets in the world, and it still has highly innovative and extremely profitable Internet companies. This is not, in any regard, a suggestion that the United States should follow China's example with respect to Internet regulation. It is simply to note that China has high levels of both censorship and innovation simultaneously, and remains a desirable market for U.S. companies despite the intensive censorship. Chinese social media company Tencent is the second largest in the world, second only to Facebook, and both Tencent and its Chinese social media competitor NetEase are larger and more profitable than Twitter. And Facebook, currently blocked by the Great Firewall of China, is still trying to find its way back into the Chinese market, using innovative approaches. So is Google.

Germany has recently instituted a law against hate speech that will require ISPs to police their own platforms. This law applies to social media sites with more than two million users in Germany. Other European Union members may do the same. But no large Internet company has yet suggested it will retreat from the German Internet or from the European Union generally. Again, this is not an endorsement of Germany's approach. It is simply offered as further evidence that the absence of §230-style ISP immunity does not dissuade large Internet companies from participation or profit seeking.

In the United States, as long as §230 remains in place and unchanged, the only options for badly victimized parties are to engage potentially costly lawyers who may not be able to help them, or to employ expensive and potentially unsavory reputation defense companies that have few if any effective tools to offer. Thoughtfully carving out a few more exceptions to §230 aimed at reducing serious online harassment will not break the Internet.

Ann Bartow is a professor of law at University of New Hampshire School of Law, where she has led the Franklin Pierce Center for Intellectual Property since 2015. Prior to entering the academy in 1995, Professor Bartow practiced law at the firm then known as McCutchen, Doyle, Brown & Enersen in San Francisco.

This essay is part of a larger collection about the impact of Zeran v. AOL curated by Eric Goldman and Jeff Kosseff.
