YouTube removed nearly 11.4 million videos as inappropriate over the past three months, largely through its automated systems. After TikTok announced the removal of thousands of videos and accounts from its service for sharing inappropriate content, YouTube has now presented figures of its own. The Google-owned video portal has just released its transparency report for the second quarter of the year, in which it disclosed the removal of a considerable number of videos. According to that report, YouTube removed about 11.4 million videos during the last three months, a figure higher than the 9 million pieces of content recorded during the same period of the previous year. According to the platform, this increase is closely related to measures to control the information shared around the health emergency the world is still facing. To that end, the video service chose to rely more heavily on its algorithms for the automatic detection of inappropriate or prohibited content, an approach that far outpaced the manual work done on this task. Between April and June of the previous year, 1,551,051 videos were removed through manual, non-automated flagging; from April to June 2020, only 552,062 were removed using that method.
The report also indicates that 33.5 percent of the deleted content related to the protection of minors, while 28.3 percent involved spam, misleading advertising, and false information. With this cleanup, in which its automated systems played a fundamental role, YouTube expects to receive a greater number of appeals from content creators, who in more than one instance have considered the removal of their content unfair. Accordingly, the video site said it has increased the number of employees dedicated to the manual review of publications in order to resolve claims as quickly as possible. Although YouTube's measure (which can be read as removing valid content and penalizing users who did nothing wrong until proven otherwise) might seem risky and heavy-handed, it is ultimately a move to protect the company's advertising business, which a couple of years ago received a serious wake-up call from advertisers who did not want to see their ads near inappropriate content. During 2018, more than 300 brands, including Adidas, Amazon, Facebook, Hershey, LinkedIn, Mozilla, Netflix, and Under Armour, were affected after their YouTube ads were placed on channels tied to inappropriate content such as pro-white-nationalist sites,
Nazi material, and other content classified as extremist. For the platform, this resulted in calls for a boycott, with hundreds of companies deciding to halt, at least temporarily, their investments in the service. That backlash was, in effect, a reading of end-consumer sentiment. In the UK alone, a recent study by Broadband Genie found that 70 percent of consumers rate ads that appear alongside sexually explicit content as inappropriate, while 69 percent say the same of racist content, 66 percent of violent topics, and 65 percent of extremist materials. Even more worrying, three-quarters of consumers admit that they would deliberately put off purchasing products or contracting services from brands whose advertising appeared (intentionally or not) near inappropriate content.