The company is recruiting thousands of reviewers to reduce the amount of “problematic content” on its video platform. It’s also introducing tougher restrictions on advertising and making greater use of smart technology.
By 2018, Google aims to have more than 10,000 people “working to address content that might violate our policies”, YouTube CEO Susan Wojcicki said in a blog post. She did not say how many people currently monitor YouTube for offensive videos.
“Some bad actors are exploiting our openness to mislead, manipulate, harass or even harm,” Wojcicki said, adding that YouTube’s trust and safety teams have reviewed nearly 2 million videos for violent extremist content over the past six months.
“We are also taking aggressive action on comments, launching new comment moderation tools and in some cases shutting down comments altogether,” she said.
YouTube has grappled with a series of controversies this year concerning videos available on its platform. It was forced to adopt additional screening measures last month on its kid-friendly platform, YouTube Kids, after reports showed many of the videos there contained profanity and violence.
Wojcicki said YouTube was taking a “new approach to advertising,” with more manual curation, stricter criteria for videos eligible to show ads, and a greater number of ad reviewers.
“We want advertisers to have peace of mind that their ads are running alongside content that reflects their brand’s values,” she said. “Equally, we want to give creators confidence that their revenue won’t be hurt by the actions of bad actors.”
The company isn’t just banking on more human intervention, however. Since June, its machine-learning algorithms have helped remove more than 150,000 videos depicting violent extremism from YouTube.
“Because we have seen these positive results, we have begun training machine-learning technology across other challenging content areas, including child safety and hate speech,” she added.