YouTube is firefighting another child safety content moderation scandal which has led several major brands to suspend advertising on its platform.
On Friday, investigations by the BBC and The Times reported finding obscene comments on videos of children uploaded to YouTube.
Only a small minority of the comments were removed after being flagged to the company via YouTube's "report content" system, the BBC said. The remaining comments and their associated accounts were only removed after the BBC contacted YouTube via press channels.
The Times, meanwhile, reported finding adverts from major brands being shown alongside videos depicting children in various states of undress and accompanied by obscene comments.
Brands freezing their YouTube advertising over the issue include Adidas, Deutsche Bank, Mars, Cadburys and Lidl, according to The Guardian.
Responding to the issues being raised, a YouTube spokesperson said it's working on an urgent fix, and told us that ads should not have been running alongside this type of content.
"There shouldn't be any ads running on this content and we are working urgently to fix this. Over the past year, we have been working to ensure that YouTube is a safe place for brands. While we have made significant changes in product, policy, enforcement and controls, we will continue to improve," said the spokesperson.
Also today, BuzzFeed reported that a pedophilic autofill search suggestion was appearing on YouTube over the weekend if the phrase "how to have" was typed into the search box.
On this, the YouTube spokesperson added: "Earlier today our teams were alerted to this profoundly disturbing autocomplete result and we worked to quickly remove it as soon as we were made aware. We are investigating this matter to determine what was behind the appearance of this autocompletion."
Earlier this year scores of brands pulled advertising from YouTube over concerns ads were being displayed alongside offensive and extremist content, including ISIS propaganda and anti-Semitic hate speech.
Google responded by beefing up YouTube's ad policies and enforcement efforts, and by giving advertisers new controls that it said would make it easier for brands to exclude "higher risk content and fine-tune where they want their ads to appear".
In the summer it also made another change in response to content criticism, announcing it was removing the ability for makers of "hateful" content to monetize via its baked-in ad network, pulling ads from being displayed alongside content that "promotes discrimination or disparages or humiliates an individual or group of people".
At the same time it said it would bar ads from videos that involve family entertainment characters engaging in inappropriate or offensive behavior.
This month further criticism was leveled at the company over the latter issue, after a writer's Medium post shone a critical spotlight on the scale of the problem. And last week YouTube announced another tightening of the rules around content aimed at children, including saying it would beef up comment moderation on videos aimed at kids, and that videos found to have inappropriate comments about children would have comments turned off altogether.
But it looks like this new tougher stance over offensive comments aimed at kids was not yet being enforced at the time of the media investigations.
The BBC said the problem with YouTube's comment moderation system failing to remove obscene comments targeting children was brought to its attention by volunteer moderators participating in YouTube's (unpaid) Trusted Flagger program.
Over a period of "several weeks", the BBC said, only five of the 28 obscene comments it had found and reported via YouTube's "flag for review" system were deleted. No action was taken against the remaining 23 until it contacted YouTube as the BBC and provided a full list. At that point, it said, all of the "predatory accounts" were closed within 24 hours.
It also cited sources with knowledge of YouTube's content moderation systems who claim associated links can be inadvertently stripped out of content reports submitted by members of the public, meaning YouTube employees who review reports may be unable to determine which specific comments are being flagged. Reviewers would, however, still be able to identify the account associated with the comments.
The BBC also reported criticism directed at YouTube by members of its Trusted Flagger program, saying they don't feel adequately supported and arguing the company could be doing much more.
"We don't have access to the tools, technologies and resources a company like YouTube has or could potentially deploy," it was told. "So for example any tools we need, we create ourselves.
"There are loads of things YouTube could be doing to reduce this sort of activity, fixing the reporting system to start with. But for example, we can't prevent predators from creating another account and have no indication when they do so we can take action."
Google does not disclose exactly how many people it employs to review content, reporting only that "thousands" of people at Google and YouTube are involved in reviewing and taking action on content and comments identified by its systems or flagged by user reports.
These human moderators also help train and develop the in-house machine learning systems that are used for content review. But while tech companies have been quick to try to use AI engineering solutions to fix content moderation, Facebook CEO Mark Zuckerberg himself has said that context remains a hard problem for AI to solve.
Highly effective automated comment moderation systems simply do not yet exist. And ultimately what's needed is far more human review to plug the gap, albeit that would be a massive expense for tech platforms like YouTube and Facebook, which host (and monetize) user-generated content at such vast scale.
But with content moderation issues continuing to rise up the political agenda, not to mention causing recurring problems with advertisers, tech giants may find themselves being forced to direct a lot more of their resources towards scrubbing problems lurking in the darker corners of their platforms.
Featured Image: nevodka/iStock Editorial