Section 230 Keeps Platforms for Defamation and Threats Highly Profitable
Section 230 takes free speech-rooted disregard for people and their feelings, and ramps it up a few notches, immunizing online media companies from liability for hosting not only anything the First Amendment protects, but also from the reach of most of the very limited speech restrictions that First Amendment jurisprudence disdainfully tolerates.
November 10, 2017 at 01:10 AM
20 minute read
The modern legal dialectic around the First Amendment is harsh and dauntingly complicated. The prevailing U.S. Supreme Court jurisprudence on the subject values free speech because it can contribute to human meaning-making and construction of selfhood, and has the potential to produce the sorts of ideas and information that can lead to human enlightenment. The court also deeply distrusts governmental regulation of speech, and has articulated powerful doubts about the government's ability to competently balance the social costs and benefits of speech, especially when driven by censorial motives. Actual living human beings and their emotions do not much factor into either the court's positive or negative justifications for free speech. See generally Toni Massaro, Helen Norton and Margot Kaminski, “SIRI-OUSLY 2.0: What Artificial Intelligence Reveals about the First Amendment,” 101 Minnesota Law Review 2481 (2017).
Section 230 takes this free speech-rooted disregard for people and their feelings, and ramps it up a few notches, immunizing online media companies from liability for hosting not only anything the First Amendment protects, but also from the reach of most of the very limited speech restrictions that First Amendment jurisprudence disdainfully tolerates.
Internet Service Providers (ISPs) can host maliciously defamatory speech that would not be protected by the First Amendment. They can host threats of violence that are outside the First Amendment. They can host obscenity as long as it is not child pornography, and they can host panic-inducing online equivalents of shouts of “Fire!” in crowded theaters without fearing civil suit or arrest, as long as no federal crime is committed.
As it happens, defamatory speech, threats and obscenity almost never rise to the level of federal crimes. According to one legal scholar, “it is now generally accepted that the First Amendment forbids criminal penalties for defamation.” Actionable threats must be “true threats” and require a higher level of culpability than negligence; it is not clear that even a showing of recklessness would be adequate. And since 1988 the federal government has only rarely pursued obscenity charges for content that did not involve or depict children. Even when speech is completely outside the protections of the First Amendment, it can almost always be hosted on a wholly for-profit basis, featuring paid advertisements or charging subscription fees, without fear of legal responsibility.
Section 230 asks nothing in return for this extensive ISP immunity. The ISPs can't be forced to remove offending content unless it fits within what are mostly very narrow exceptions, as demonstrated by twenty years of litigation. The only broadly interpreted immunity exception is for intellectual property, which §230 actually cares about because it is rooted in money and commerce and intangible “property” rather than people and their messy and seemingly inconsequential emotions.
ISPs don't have to keep track of who posts what. Nor must they identify the person behind an offensive post unless they want to, or unless they choose to comply with an appropriately drafted and served subpoena, which means legal representation is generally necessary to successfully identify the source of harmful speech.
Section 230 has therefore made hosting defamation, threats and exhortations that lead to panic or violence into a lucrative online business model. Twenty years ago, AOL strategically ignored Ken Zeran's horrific victimization by an anonymous internet hoaxer.
Today, acts of online harassment directed at contemporary Ken Zerans are more likely to fill the enormous coffers of companies like Google, Facebook, Twitter, GoDaddy and Reddit. The platforms may change over time but the basic framework remains the same. Eyeball attraction generates demand for online services such as web hosting, cloud computing, advertising, data analytics, storage, and domain name registration.
Hatred can be very profitable. Research conducted by ProPublica “surveyed the most visited websites of groups designated as extremist by either the SPLC or the Anti-Defamation League … [and] found that more than half of them—39 out of 69—made money from ads, donations or other revenue streams facilitated by technology companies. At least 10 tech companies played a role directly or indirectly in supporting these sites.” ProPublica further found that “PayPal, the payment processor, has a policy against working with sites that use its service for ‘the promotion of hate, violence, [or] racial intolerance.’ Yet it was by far the top tech provider to the hate sites with donation links on 23 sites, or about one-third of those surveyed by ProPublica.”
A recent Pew Research Center survey found that 41% of adult Americans “have been personally subjected to harassing behavior online, and an even larger share (66%) has witnessed these behaviors directed at others. … [N]early one-in-five Americans (18%) have been subjected to particularly severe forms of harassment online, such as physical threats, harassment over a sustained period, sexual harassment or stalking.” A full 58% of those who have been harassed online said it happened via social media, while for 23% their “most recent” harassment experience occurred in the comments sections of a website; for 15% the harassment occurred via a text or messaging app. Occasionally, ISPs will help out individual harassment victims. But they are not required to do so, and usually they will not.
A few large social media platforms are voluntarily addressing some online harassment campaigns to appease advertisers and large, well-organized interest groups, with interventions that focus on hate speech targeted at groups sharing common characteristics such as race, gender, sexual orientation, political beliefs or religion. Some affected individuals see such interventions as inadequate, while other people see them as censorious threats to freedom of expression online. The companies that own these platforms are much more likely to base their strategies for addressing online harassment on what is most profitable than to carefully balance privacy, safety, and speech interests. Section 230 endorses an approach to speech that is entirely driven by money. The online media companies that rein in threats and hate speech on their platforms in turn create profitable opportunities for the emergence of new social media platforms on which anything goes.
Even with a strong commitment to expansive free speech principles, a sense of decency and fair play should make one question the legitimacy of §230 as currently written and interpreted. Manufacturers, food producers, and companies in the service industries have to take responsibility for their goods and services, no matter how large the company or how prodigious its output. But gigantic, fabulously wealthy companies like Facebook, Google and Twitter do not have to take any responsibility for the harms caused by the online platforms they own, control and profit from. Section 230 means that companies are allowed to facilitate or ignore speech-ignited harms they absolutely have the right and ability to control, as long as someone else is the speaker.
Some people tout §230 as the law that created the Internet. But given the willingness of social media, e-commerce, Internet search and web hosting businesses to do business in nations that lack laws anything like §230, histrionic claims that without §230 successful and creative online companies like Google, Facebook and Twitter would not exist or could not thrive are unsupportable.
As I have argued before, ISPs would still be profitable even if they were required to affirmatively mitigate the most severe of the harms that result from some portion of the online speech they host. China has one of the most highly censored Internets in the world, and it still has highly innovative and extremely profitable Internet companies. This is not in any regard a suggestion that the United States should follow China's example with respect to Internet regulation. It is simply to note that China has high levels of both censorship and innovation simultaneously, and remains a desirable market for U.S. companies despite the intensive censorship. Chinese social media company Tencent is the second largest in the world, second only to Facebook, and both Tencent and its Chinese social media competitor NetEase are larger and more profitable than Twitter. And Facebook, currently blocked by the Great Firewall of China, is still trying to find its way back into the Chinese market, using innovative approaches. So is Google.
Germany has recently instituted a law against hate speech that will require ISPs to police their own platforms. This law applies to social media sites with more than two million users in Germany. Other European Union members may do the same. But no large Internet company has yet suggested it will retreat from the German Internet or from the European Union generally. Again, this is not an endorsement of Germany's approach. It is simply offered as further evidence that the absence of §230 style ISP immunity does not dissuade large Internet companies from participation or profit seeking.
In the United States, as long as §230 remains in place and unchanged, the only options for badly victimized parties are to engage potentially costly lawyers who may not be able to help them, or to employ expensive and potentially unsavory reputation defense companies that have few if any effective tools to offer. Thoughtfully carving out a few more exceptions to §230 aimed at reducing serious online harassment will not break the Internet.
Ann Bartow is a professor of law at University of New Hampshire School of Law, where she has led the Franklin Pierce Center for Intellectual Property since 2015. Prior to entering the academy in 1995, Professor Bartow practiced law at the firm then known as McCutchen, Doyle, Brown & Enersen in San Francisco.
This essay is part of a larger collection about the impact of Zeran v. AOL curated by Eric Goldman and Jeff Kosseff.