5 Takeaways From Tech Leaders' Content Moderation Conference
February 05, 2018 at 06:18 PM
Tech companies' legal leaders and policy experts gathered at Santa Clara University School of Law on Feb. 2 for a day of panels on content moderation and removal. The school's High Tech Law Institute brought representatives from Facebook, Google, Reddit and others to discuss many facets of content moderation, including mental health, artificial intelligence and transparency.
Here are some big takeaways from a day of discussion featuring some of Silicon Valley's most famous companies:
1. Artificial Intelligence Can Remove Content Faster, but Not Better
When it comes to content moderation, context is key. But, as Facebook Inc. public policy manager Neil Potts said Friday, "Automation is not great at context yet."
At one panel session, “Humans vs. Machines,” Potts and other panelists agreed that while AI can be useful for black-and-white cases such as posts that endanger or exploit children, a human eye is needed to make most removal decisions accurately.
“It's hard to tell if a review is racist, or if the review is describing a company that was racist,” Yelp Inc. deputy GC Aaron Schur said, providing an example. “[That] requires a human eye for judgment.”
Panelists also noted that changing laws—such as those in Europe, and particularly the U.K.—forcing sites to take down harmful content in a short period of time could force platforms to rely on less accurate, nonhuman content moderators.
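The hybrid approach the panelists described can be sketched in a few lines of code. The example below is illustrative only, with a hypothetical confidence threshold and a stubbed-out classifier rather than any company's actual pipeline: automation acts alone on clear-cut, high-confidence cases, while anything context-dependent is routed to a human reviewer.

```python
# A minimal sketch of the hybrid triage panelists described; the threshold,
# labels and classifier are hypothetical placeholders, not a real system.
from dataclasses import dataclass

AUTO_REMOVE_THRESHOLD = 0.98  # hypothetical confidence cutoff


@dataclass
class Post:
    post_id: str
    text: str


def classify(post: Post) -> tuple[str, float]:
    """Placeholder for an ML classifier returning (label, confidence)."""
    # A trained model would be called here; this stub always returns "ok".
    return ("ok", 0.5)


def triage(post: Post) -> str:
    label, confidence = classify(post)
    if label == "child_safety" and confidence >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"          # clear-cut case: automation acts alone
    if label != "ok":
        return "human_review_queue"   # context-dependent: needs a human eye
    return "publish"
```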
2. Human Content Moderators Are Only Human
Spending all day, every workday, looking at graphic images and disturbing posts online takes a toll on mental health. Human content moderators can get worn down from constant exposure to the worst parts of the internet, according to panelists at another session, “Employee/Contractor Hiring, Training and Mental Well-Being.” If moderators feel burnt out, their work may suffer.
Panelists said their companies have a variety of ways to help moderators stay healthy, including counseling and massage therapy.
“We have a lot of different wellness-oriented perks because we really believe that human moderators are the key to having high-quality moderation,” said Charlotte Willner, trust and safety manager of Pinterest. She recommended that companies “invest in their skill set, teach them to become familiar with this type of content, [and] invest in their long-term health.”
3. The Community Knows Best (Kind Of)
During another session, on whether and when to outsource moderation, panelists from Reddit, Wikimedia and Nextdoor offered similar advice: turn to the community first. They all said community self-regulation is their sites' most common form of content moderation.
“We let communities decide what the rules are for that community, decide what should go in and out there,” Reddit counsel Zac Cox said. “People who join [the] community can follow the rules and help enforce them by flagging content or commenting in a way that reinforces those norms.”
Many platforms, such as Reddit and Yelp, include features that allow users to up-vote a post or mark it as useful. This is another form of self-moderation, Cox said, as it allows popular content to rise to the top and pushes content the community doesn't want to see down.
But most of the companies said they do have clear processes in place for escalating a content moderation situation that's too large for the community to handle alone, or that isn't being handled properly.
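That combination of community ranking and staff escalation can be summarized in a short sketch. The field names and flag threshold below are hypothetical, assumed for illustration rather than drawn from Reddit's or any panelist's actual system.

```python
# A minimal sketch of vote-based ranking plus a flag-count escalation check;
# all names and the threshold are assumptions made for illustration.
from dataclasses import dataclass

FLAG_ESCALATION_THRESHOLD = 5  # hypothetical cutoff


@dataclass
class CommunityPost:
    post_id: str
    up_votes: int = 0
    down_votes: int = 0
    flags: int = 0


def rank(posts: list[CommunityPost]) -> list[CommunityPost]:
    """Popular content rises to the top; down-voted content sinks."""
    return sorted(posts, key=lambda p: p.up_votes - p.down_votes, reverse=True)


def needs_escalation(post: CommunityPost) -> bool:
    """Hand a post to the trust-and-safety team once community flags pile up."""
    return post.flags >= FLAG_ESCALATION_THRESHOLD
```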
4. Be Transparent, but Don't Help the Bots
Companies have a fine line to walk with content moderation rules. The guidelines should be clear enough that users know the repercussions of abuse on the platform, but not so clear that users, or bots, can game the system, panelists at the event said.
If users and bots can figure out that harmful content won't be removed unless it contains a specific word or slur, according to panelists, they may continue posting disturbing content that doesn't technically violate guidelines.
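One way to see the problem: a rule keyed to an exact, published word list is trivially gamed. The sketch below uses a hypothetical placeholder word to show how an obfuscated variant slips past an exact-match filter; it is not any platform's real rule set.

```python
# A minimal sketch of why exact-match keyword rules invite gaming;
# the banned-word list here is a hypothetical placeholder.
BANNED_WORDS = {"badword"}


def naive_filter(text: str) -> bool:
    """Returns True if the post would be removed by an exact-match rule."""
    return any(word in text.lower().split() for word in BANNED_WORDS)


print(naive_filter("this contains badword"))   # True  -> removed
print(naive_filter("this contains b4dw0rd"))   # False -> slips through
```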
When a post does get removed, Patreon Inc.'s head of legal Colin Sullivan said it's crucial that moderators take the time to hear out users' appeals and explain what happened.
“Creators think we're making a fair decision about them, so they feel like we're making fair decisions in general,” Sullivan said.
Sullivan and Medium's head of legal Alex Feerst said transparency becomes more difficult when the user seems to be a bot. Alerting a bot that its account has been shut down or has violated guidelines may push the bot to create another account. In this case, they said, it could be better to isolate the account and not inform the bot.
5. Diversity Is Key
Diversity is important in every part of the company, and that holds true for trust and safety teams. Content moderators should be well-trained, but panelists at Friday's event noted that having moderators from different backgrounds can lead to a better conversation about what is or isn't harmful content.
Feerst said Medium has rotations into the moderator role so that people from around the company can spend time doing trust and safety work.
“Trust and safety and content moderation is a field that, if you don't have gender and ethnic and other [types of diversity], you can't do it,” Feerst said. “Because you don't have enough perspective to generate the cultural competencies, and, most importantly, you don't know what you don't know.”