Image: Social Media Cube (Credit: Stanislaw Mikulski/Shutterstock.com)

The internet has transformed the way we access and consume information and communicate with each other, but the rise of online platforms brings its own challenges. Tech companies are facing increasing pressure to tackle disinformation, online harassment and other illegal or objectionable content on their platforms, and they are turning to AI to help manage the process.


Viral Videos

Businesses agonize over the elusive formula for creating viral content, whether it's a music video such as Luis Fonsi's Despacito ft. Daddy Yankee or an unlikely YouTube sensation such as Baby Shark Dance. Tech giants, on the flip side, need to be equipped to control the speed at which harmful content can be disseminated on their platforms.

There is increasing pressure on tech companies to develop and invest in ways to prevent, detect and respond to illegal and objectionable content on their platforms. While teams of human moderators are involved in reviewing online content, the focus has recently shifted to AI solutions that automate this process. But is AI the answer to online content moderation?


Teamwork Backed by AI Tools

Tech companies employ teams of people to tackle the task of content moderation on their platforms, but people alone can't keep up with the scale of content shared online. For example, around 576,000 hours of content are uploaded to YouTube each day, and its users watch more than a billion hours of video. There has also been public concern about the wellbeing of staff employed to review extreme and distressing images, some of whom have suffered negative effects on their mental health.

To manage the volume of content and the speed at which it is disseminated, tech companies have turned to AI solutions that automatically analyse and classify uploaded content and add it to blacklists, so that future attempts to upload the same content are blocked. The situation is often likened to "whack-a-mole": tech companies make considerable efforts to take down content, but it is often edited just enough to evade the block and reposted by other users.
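
To make the blacklist idea concrete, the sketch below fingerprints each upload and rejects anything that matches previously removed material. It is a minimal illustration only: the names are hypothetical, and real platforms typically rely on perceptual hashes or learned classifiers (which tolerate re-encoding and small edits) rather than the exact cryptographic hash used here, which is precisely why lightly edited copies slip through.

```python
import hashlib

# Hypothetical blacklist of fingerprints of previously removed content.
# In production this would be a large, shared database, not an in-memory set.
blocked_hashes = set()

def fingerprint(data: bytes) -> str:
    """Return an exact SHA-256 fingerprint of the uploaded bytes.

    An exact hash only catches byte-identical re-uploads; a file that is
    re-encoded or lightly edited produces a different hash, which is one
    reason moderation is likened to "whack-a-mole".
    """
    return hashlib.sha256(data).hexdigest()

def block_content(data: bytes) -> None:
    """Add content that moderators have removed to the blacklist."""
    blocked_hashes.add(fingerprint(data))

def screen_upload(data: bytes) -> bool:
    """Return True if the upload may proceed, False if it is blocked."""
    return fingerprint(data) not in blocked_hashes

# Example: once a clip is taken down, identical re-uploads are rejected.
clip = b"raw bytes of a removed video"
block_content(clip)
assert screen_upload(clip) is False
assert screen_upload(b"slightly edited copy") is True  # evades an exact hash
```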

Live streaming presents its own challenges. Many AI solutions cannot assess the characteristics of content instantly, so a stream may run for some time before it is taken down. But capabilities are advancing fast. Last month, Facebook released new state-of-the-art software that translates speech into text in real time with improved accuracy, which should enhance Facebook's policing of live footage on the platform.
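
The general pattern for policing live audio is to transcribe short chunks of the stream as they arrive and pass the text through the same checks applied to written posts. The sketch below illustrates that pipeline under stated assumptions: transcribe_chunk is a stand-in for a real speech-to-text model (this is not Facebook's system, whose internals are not public), and the banned-phrase check stands in for a full text classifier.

```python
from typing import Iterable

# Placeholder vocabulary; a real system would use a trained text classifier
# rather than simple phrase matching.
BANNED_PHRASES = {"example banned phrase"}

def transcribe_chunk(audio_chunk: bytes) -> str:
    """Stand-in for a real-time speech-to-text model.

    A production system would call an ASR model here; this stub simply
    returns an empty transcript so the sketch stays self-contained.
    """
    return ""

def moderate_stream(audio_chunks: Iterable[bytes]) -> None:
    """Transcribe a live stream chunk by chunk and flag violations."""
    for i, chunk in enumerate(audio_chunks):
        transcript = transcribe_chunk(chunk).lower()
        if any(phrase in transcript for phrase in BANNED_PHRASES):
            # A real platform might cut the stream here or queue it
            # for a human moderator.
            print(f"chunk {i}: flagged for human review")

# Example usage on two (empty) audio chunks.
moderate_stream([b"chunk-1", b"chunk-2"])
```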


Challenges

While tech companies grapple with monitoring and policing content on their platforms, there are some important challenges that AI systems will need to overcome:

Training Data: AI tools need to be trained to detect specific types of content, which requires large quantities of labelled examples. As a result, AI tends to be better at detecting the types of content that are regularly in circulation, leaving a knowledge gap for rarer types of content. This has prompted an increasing focus on ensuring the diversity and transparency of training data sets.
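
A toy example of the training-data point: the sketch below fits a small text classifier on labelled examples where one category is plentiful and another is rare. The data and categories are invented purely for illustration, and scikit-learn is used only as an example toolkit, not as a claim about any platform's actual tooling.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: the "spam" category has many labelled examples,
# the "rare_harm" category only two, mirroring the knowledge gap above.
texts = [
    "buy cheap watches now", "cheap watches for sale", "best cheap watches",
    "discount watches buy now", "cheap pills buy today", "buy discount pills",
    "obscure harmful phrasing one", "obscure harmful phrasing two",
]
labels = ["spam"] * 6 + ["rare_harm"] * 2

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Plenty of spam-like wording means new spam is usually recognised; with
# only two rare_harm examples, new variants of it are classified unreliably.
print(model.predict(["buy cheap discount watches"]))      # likely "spam"
print(model.predict(["a new variant of obscure harm"]))   # unreliable
```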

Context and Discretion: Moderation decisions are often very complex. Legal frameworks differ between jurisdictions, and cultural variances can significantly affect whether content is considered objectionable. AI algorithms can become adept at classifying certain characteristics, but they often struggle with human concepts such as sarcasm and irony, or with the fact that content can be more or less objectionable depending on the identity of the party posting it online. As context is often crucial, tech companies still rely on human judgment as the final arbiter of content moderation decisions on their platforms.

Press Freedom and Democracy: As tech companies come under increasing pressure to moderate their platforms, there is concern that the removal of content based on AI decision-making could, in some circumstances, lead to censorship or a hampering of press freedom. On the other hand, AI bots have reportedly been used to influence public opinion by spreading propaganda and disinformation during election campaigns. Tech giants have cited AI as the solution to combat this, which raises the question of whether the best way to fight AI is with more AI.

Responsibility and Governance: Online platforms are responsible for their own content moderation governance. However, organisations provide varying degrees of transparency about their algorithms and decision-making processes, leading to concern that the use of AI could result in certain types of social bias or inequality. Google has previously announced a set of AI Principles to guide the responsible development and use of AI. Last year, Google also announced the launch of its Advanced Technology External Advisory Council (ATEAC) to consider some of its most complex challenges arising under those AI Principles; the council lasted just one week before Google issued a statement confirming that it intended to "find different ways of getting outside opinions on these topics". Facebook is also bolstering its AI governance framework and has just announced the launch of an independent oversight board, a group of experts with the power to review and reverse decisions by its teams of content moderators, which is intended to be operational within a few months.

AI clearly has a critical role to play in protecting the credibility and reputation of tech companies by keeping their users safe online. However, the challenges above suggest that while AI is an important part of tech companies' armoury in online content moderation, it may not be a panacea for harmful online content, at least in the short term. There is no one-size-fits-all solution to the ever-changing landscape of content moderation, but it seems that tech giants are focussed on finding the right combination of sophisticated technologies and human oversight to ensure that their platforms are policed efficiently and effectively.

Phil Sherrell heads Bird & Bird's international Media, Entertainment & Sport sector and focuses on litigating disputes relating to content and brands. Esme Strathcole is an Associate at Bird & Bird.