In a press statement, Susan Wojcicki, CEO of YouTube, claims 98% of the videos the platform removes for violent extremism are flagged by machine-learning algorithms, as the social video channel aims to improve ad transparency over the next 12 months.
The announcement comes amid another round of controversy for the social video platform, after recent reports that global advertisers such as Cadbury, Adidas and Mars pulled their ads over concerns they were being served against exploitative content featuring children.
In light of this, Wojcicki admitted that “problematic content” remains a challenge, with YouTube’s main goal being to stay ahead of bad actors and make it difficult for extremist content to surface, or remain, on the video channel at all.
“I’ve seen up-close that there can be another, more troubling, side of YouTube’s openness,” said Wojcicki. “I’ve seen how some bad actors are exploiting our openness to mislead, manipulate, harass or even harm.
“Now, we are applying the lessons we’ve learned from our work fighting violent extremism content over the last year in order to tackle other problematic content.”
Efficiency
In June, YouTube deployed machine-learning algorithms in a bid to more efficiently remove inappropriate content that would otherwise violate its guidelines.
Since taking action, YouTube claims to have removed over 150,000 videos for violent extremism, taking down nearly 70% of such content within eight hours of upload and flagging material that would otherwise have required 40 working hours a week of human review.
“Because we have seen these positive results, we have begun training machine-learning technology across other challenging content areas, including child safety and hate speech,” Wojcicki concluded.