YouTube has released a transparency report showing a high volume of inappropriate content being uploaded; however, automated flagging is speeding up the removal process.
It’s easy for the internet to get cluttered with spam and inappropriate content, which means major clean-up work for the big internet companies that receive massive amounts of uploads and traffic.
YouTube is one of them: the company removed eight million videos in three months.
Seeking more transparency and less spam, Google, which purchased YouTube in 2006, has published an update regarding the ongoing removal of content that violates its policy. The company has released astonishing figures, along with a quarterly report on how Community Guidelines are being enforced.
The eight million videos that have been removed from the popular video sharing platform were “mostly spam or people attempting to upload adult content,” according to Google, “and represent a fraction of a percent of YouTube’s total views during this time period.”
Machines were the first to flag 6.7 million of those videos, and 76 percent of them were removed before they received a single view.
These machines allow the company to flag content at scale, and YouTube says the technology is paying off with high-speed removals in high-risk, low-volume areas (like violent extremism) as well as in high-volume areas (like spam).
More than half of the videos removed for violent extremism now have fewer than 10 views at the time of removal, up from eight percent at the beginning of 2017.
Although the deployment of machines might suggest a lesser need for humans, that has not been the case at YouTube. Its systems rely on human review, the company says, and it has been hiring accordingly.
“At YouTube, we’ve staffed the majority of additional roles needed to reach our contribution to meeting that goal. We’ve also hired full-time specialists with expertise in violent extremism, counterterrorism, and human rights, and we’ve expanded regional expert teams,” stated the company’s official blog.
As for this year’s goals, the company is committed to bringing the total number of people working to address content that might violate its policies to 10,000 across Google. It also plans to refine its reporting systems and add more data, including data on comments, the speed of removal, and the reasons for policy removals.
For anyone interested in reviewing the numbers, here is the transparency report.