Can AI Content Moderation Keep Us Safe Online?
There is no one-size-fits-all solution to the ever-changing landscape of content moderation, but tech giants appear focused on finding the right combination of sophisticated technology and human oversight.
February 18, 2020 at 07:00 AM
6-minute read
The internet has transformed the ways in which we access and consume information and communicate with each other, but the rise of online platforms brings challenges of its own. Tech companies face increasing pressure to tackle disinformation, online harassment and other illegal or objectionable content on their platforms, and they are turning to AI to help manage the process.
|Viral Videos
Businesses agonize over the elusive formula for creating viral content, whether it is music videos such as Luis Fonsi's Despacito ft. Daddy Yankee or unlikely YouTube sensations such as Baby Shark Dance. Tech giants, on the flip side, need to be equipped to control the speed at which harmful content can be disseminated on their platforms.
There is increasing pressure on tech companies to develop and invest in ways to prevent, detect and respond to illegal and objectionable content on their platforms. While teams of human moderators review online content, the focus has recently shifted to AI solutions that automate the process. But is AI the answer to online content moderation?
|Teamwork Backed by AI Tools
Tech companies have teams of people to tackle the task of content moderation on their platforms, but people alone cannot keep up with the scale of content shared online. For example, around 576,000 hours of content are uploaded to YouTube each day, with its users watching more than a billion hours of video. There has also been public concern about the wellbeing of staff employed to review extreme and distressing images, some of whom have suffered negative effects on their mental health.
To manage the volume of content and the speed at which it is disseminated, tech companies have turned to AI to automate the analysis and classification of uploaded content, creating blacklists so that future attempts to upload the same content are blocked. The situation is often likened to "whack-a-mole": tech companies make considerable efforts to take down content, but it is often edited to evade the block and reposted by other users.
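One common building block behind such blacklists is hash matching: every upload is fingerprinted, and the fingerprint is compared against a list of fingerprints taken from content that moderators have already removed. The Python sketch below is a deliberately minimal illustration of the idea, not any platform's actual system; production tools rely on perceptual hashes that survive re-encoding and small edits, whereas the exact cryptographic hash used here is defeated by the slightest change to the file, which is precisely the "whack-a-mole" problem described above.

```python
import hashlib

# Fingerprints of content that moderators have already taken down.
blocklist: set[str] = set()

def fingerprint(data: bytes) -> str:
    # A cryptographic hash only matches byte-identical files; real systems
    # use perceptual hashing, which tolerates re-encoding and minor edits.
    return hashlib.sha256(data).hexdigest()

def take_down(data: bytes) -> None:
    # When content is removed, record its fingerprint so re-uploads of the
    # identical file are blocked automatically.
    blocklist.add(fingerprint(data))

def handle_upload(data: bytes) -> str:
    return "blocked" if fingerprint(data) in blocklist else "published"

take_down(b"harmful-video-bytes")
print(handle_upload(b"harmful-video-bytes"))   # blocked
print(handle_upload(b"harmful-video-byteX"))   # published: one byte changed
```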
Live streaming presents its own challenges. Many AI solutions cannot detect the characteristics of content instantly, so material may be streamed successfully for a period of time before being taken down. But capabilities are advancing fast: last month, Facebook released new state-of-the-art software that translates speech into text in real time with improved accuracy, which will enhance its policing of live footage on the platform.
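The sketch below, again simplified to the point of caricature, shows why real-time transcription matters for live content: each chunk of a stream is transcribed and screened as it arrives, rather than after the broadcast ends. Both helper functions are invented stand-ins, not any platform's actual API.

```python
BANNED_PHRASES = {"example banned phrase"}  # stand-in for a real policy list

def transcribe_chunk(audio_chunk: bytes) -> str:
    # Stand-in for a real-time speech-to-text model; here the "audio" is
    # treated as text so the example runs end to end.
    return audio_chunk.decode("utf-8", errors="ignore")

def is_objectionable(transcript: str) -> bool:
    # Stand-in for a trained classifier: naive phrase matching.
    return any(p in transcript.lower() for p in BANNED_PHRASES)

def monitor_stream(audio_chunks) -> None:
    # Screening chunk by chunk shrinks the window during which harmful
    # footage stays live, rather than reviewing the video after the fact.
    for chunk in audio_chunks:
        if is_objectionable(transcribe_chunk(chunk)):
            print("stream flagged for human review")
            break

monitor_stream([b"hello everyone", b"EXAMPLE BANNED PHRASE", b"goodbye"])
```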
|Challenges
While tech companies grapple with monitoring and policing content on their platforms, there are some important challenges that AI systems will need to overcome:
Training Data: AI tools need to be trained to detect specific types of content, which requires large quantities of labelled examples. As a result, AI tends to be better at detecting the types of content that are regularly in circulation, leaving a knowledge gap for rarer types of content; hence the increasing focus on ensuring the diversity and transparency of training data sets. (A simplified sketch after this list shows one common mitigation for such imbalance, class weighting.)
Context and Discretion: Moderation decisions are often very complex. Legal frameworks differ between jurisdictions, and cultural variances can significantly affect whether content is considered objectionable. AI algorithms can master the classification of certain characteristics, but they often struggle with human concepts such as sarcasm and irony, or with the fact that content can be more or less objectionable depending on who posts it. As context is often crucial, tech companies still rely on human judgment as the final arbiter of content moderation decisions; the sketch after this list also illustrates one pragmatic compromise, in which only high-confidence predictions are actioned automatically and borderline cases are routed to people.
Press Freedom and Democracy: As tech companies come under increasing pressure to moderate their platforms, there is concern that the removal of content based on AI decision-making could, in some circumstances, amount to censorship or hamper press freedom. On the other hand, AI bots have reportedly been used to influence public opinion by spreading propaganda and disinformation during election campaigns. Tech giants have cited AI as the solution to combat this, which raises the question of whether the best way to fight AI is with more AI.
Responsibility and Governance: Online platforms are responsible for their own content moderation governance, but organisations provide varying degrees of transparency about their algorithms and decision-making processes, leading to concern that the use of AI could entrench certain types of social bias or inequality. Google has previously announced a set of AI Principles to guide the responsible development and use of AI. Last year it also launched its Advanced Technology External Advisory Council (ATEAC) to consider some of the most complex challenges arising under those principles; the council lasted just one week before Google issued a statement confirming that it intended to "find different ways of getting outside opinions on these topics". Facebook is also bolstering its AI governance framework and has just announced an independent oversight board: a group of experts with the power to review and reverse decisions by its teams of content moderators, intended to be operational within a few months.
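Two of the mitigations mentioned in the list above can be made concrete in a few lines of Python. The sketch below trains a toy text classifier (using scikit-learn) on a deliberately imbalanced dataset, compensating for the scarcity of harmful examples with class weighting, and then auto-actions only high-confidence predictions while routing borderline cases to a human reviewer. Every example, phrase and threshold in it is invented for illustration; it is not how any platform's production system works.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A deliberately imbalanced toy dataset: harmful examples are rare,
# mirroring the training-data gap described above.
texts = [
    "have a great day", "see you at the match", "lovely weather today",
    "thanks for sharing this", "congratulations on the new job",
    "i will hurt you", "you deserve to suffer",
]
labels = [0, 0, 0, 0, 0, 1, 1]  # 0 = benign, 1 = harmful

# class_weight="balanced" upweights the rare harmful class during training.
model = make_pipeline(
    TfidfVectorizer(),
    LogisticRegression(class_weight="balanced"),
)
model.fit(texts, labels)

def moderate(post: str, threshold: float = 0.85) -> str:
    """Auto-action only confident predictions; defer the rest to a person."""
    p_harmful = model.predict_proba([post])[0][1]
    if p_harmful >= threshold:
        return "remove automatically"
    if p_harmful <= 1 - threshold:
        return "publish"
    return "route to human reviewer"  # context and discretion required

print(moderate("you deserve to suffer"))
print(moderate("that match was brutal"))  # benign use of a charged word
```

The threshold is the policy lever: raising it sends more cases to people and fewer to the machine, which is the human-oversight trade-off this article closes on.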
Taken together, these factors show that AI has a critical role to play in protecting the credibility and reputation of tech companies by keeping their users safe online. The associated challenges, however, suggest that while AI is an important part of tech companies' armoury in online content moderation, it is no panacea for harmful online content, at least in the short term. There is no one-size-fits-all solution to the ever-changing landscape of content moderation, but tech giants appear focused on finding the right combination of sophisticated technology and human oversight to ensure that their platforms are policed efficiently and effectively.
Phil Sherrell heads Bird & Bird's international Media, Entertainment & Sport sector and focuses on litigating disputes relating to content and brands. Esme Strathcole is an Associate at Bird & Bird.