Despite Likely Section 230 Cover, Removing 'Fake News' Can Be Tricky
Social media companies and websites are likely protected under Section 230 of the Communications Decency Act when removing fraudulent content from their platforms. But that doesn't make it easy.
March 22, 2018 at 12:00 PM
With new technology making it easier to create fraudulent video content and growing revelations about Russia's efforts to influence the 2016 U.S. presidential election, the clarion call to deal with the scourge of “fake news” has become ever louder.
Dealing with fraudulent content, however, is not as simple as one might think. In many cases, such content is protected under the First Amendment and poses little to no liability risk for the internet publishers who host it on their websites or social media platforms.
But the law that protects publishers from legal action also likely enables them to remove fraudulent content at their discretion. In practice, however, removing such content can be quite tricky.
Section 230 of the Communications Decency Act grants certain types of immunity to “interactive computer services,” a term that has been defined broadly to include internet publishers like social media companies and websites. And though the statute is likely to be changed in the near future to allow prosecutors to go after internet services that enable or support sex trafficking, such changes would not affect the law's immunity with regard to fraudulent content.
Under Section 230(c)(1) of the law, internet publishers are not liable for any content created and posted on their service by third-party content providers, such as users on Facebook or people posting on an online forum.
Nor are they liable, under Section 230(c)(2), for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”
Much case law on Section 230, including Zeran v. America Online, Blumenthal v. Drudge and Milo v. Martin, has affirmed that internet publishers are not liable for fraudulent content posted by third parties, nor are they required to verify the accuracy of that content.
Still, Section 230 doesn't address fraudulent content directly. “The fact is, it doesn't fall within any of those categories” in Section 230(c)(2), said Thomas Cafferty, director of commercial and criminal litigation, and leader of the media law team at Gibbons. However, he added that “you could curate fake news under the terms 'otherwise objectionable.'”
To be sure, even though there is no “fake news” category, there is little question that “fake news” could be removed by internet publishers without them losing Section 230 immunity.
“An internet company would be in their rights and within the immunity that Section 230 provides to take down and remove speech of a fraudulent nature,” said Markham Erickson, partner at Steptoe & Johnson and chair of the firm's internet, telecom and technology practice group.
Over the years, courts have interpreted Section 230(c)(2) broadly, providing flexibility to publishers to run their services in the way they choose.
The content that internet publishers can remove “is anything that in good faith a service might deem to be improper, offensive or otherwise undesirable, so it's important to recognize those criteria it spells out are not exclusive,” said Jeffrey Hermes, deputy director at the Media Law Resource Center, a trade organization for media attorneys.
As an example, Hermes pointed to the recent decision in Dawn Bennett v. Google, where the U.S. Court of Appeals for the District of Columbia ruled that Google was protected under Section 230 from having to de-index a blog post that the plaintiff alleged was defamatory. The court also rejected the argument that, because Google enforces a “Content Policy” for its bloggers, it influences, and thereby creates, the content it publishes and therefore does not qualify for Section 230 immunity.
That Google's content policy goes beyond the categories of removable content enumerated in Section 230(c)(2) underscores the law's malleability in granting publishers wide editorial discretion.
“Internet service providers are allowed to enforce content guidelines without losing immunity, and allowing for this kind of control is one of the reasons Section 230 was created,” said Desiree Moore, partner at K&L Gates.
But she added, “platforms should be careful that content guidelines are applied fairly across all types of content.”
Indeed, if an internet publisher is found not to have acted in “good faith,” such as when it removes third-party content because the content “competes with something they themselves are publishing, or removes it in order to advance another third party's interest,” the publisher may lose its Section 230 immunity, Hermes said.
This doesn't mean the publisher will automatically incur liability, just that it is more exposed without the Section 230 immunity. “The important thing to remember here is that you don't trigger liability under Section 230(c)(2) because you remove content. You only trigger liability if there is some other reason why the removal is wrongful,” Hermes said.
Removing Fake News? Good Luck
While internet publishers are likely protected when removing fraudulent content, how to go about doing so is a matter of much debate.
“I think we are in this phase where different companies are experimenting with different ways to try to go after content that is fraudulent or violates their terms of service in some way,” Erickson said.
Twitter, for example, has begun to restrict users' ability to perform coordinated actions, such as posting or liking tweets simultaneously from multiple accounts, and has moved to take down automated “bot” accounts that post fake news.
Facebook has also taken a slew of actions over the years to try to combat the spread of fake news on its platform. In late 2016, the social media company partnered with fact-checking organizations and reconfigured its news feed algorithm to flag fraudulent content. In January 2018, it further changed its algorithm so that users would see more posts from friends and family and fewer from news or video publishers. The move, however, has been panned by some as exacerbating the fake news problem rather than solving it.
Facebook has also since released a two-question survey to some of its users to determine which news sources they trust. Yet some have questioned how the social media company can combat fraudulent content by polling its audience using a simple survey, and Facebook hasn't been clear on how it intends to use the survey data.
Of course, social media companies like Facebook are likely sensitive to how they handle fake content, lest they alienate different communities. Facebook has already come under fire for allegedly suppressing news stories that expressed conservative views. The social media company has also been rocked by criticism over how it handled fake accounts linked to Russian actors trying to influence the 2016 U.S. presidential election, and the harvesting of personal data by Cambridge Analytica, which is linked to the 2016 presidential campaign of Donald Trump.
For most social media companies, there is likely also the concern that exercising editorial discretion too broadly may stifle the free flow of communication and ideas. As Hermes explained, “It can be difficult to tell the difference between what's a false statement of fact and what is an opinion in terms of what is colloquially labeled as fake news.”
What's more, though Section 230 offers social media companies broad discretion when curating or removing third-party content, they may still want to hedge against losing that protection, for instance, by defining “fake news” in such a way that, even if they lost Section 230 immunity, they would still be free of any liability for removing it.
“There is a very strong First Amendment opinion from the U.S. Supreme Court in U.S. v. Alvarez, which talks about whether or not falsity in and of itself can be constitutionally restricted, and [the court] eventually says that if there [is] no specific identifiable concrete harm that flows from it, then the First Amendment still protects falsity,” Hermes said.
“And so you really need to define what you mean by fake news; you need to figure out what harm is flowing from it. It's not enough to just say this stuff is deleterious to our culture,” he noted.