In an effort to curb harassment on its video-sharing platform, YouTube on Wednesday announced an expansion of its anti-harassment policy. The company will ban video creators who insult other YouTubers on the basis of race, gender, religion, or sexual orientation. The same rule applies to creators who insult a celebrity, politician, or other public figure.
The new policy comes after the company declined to remove videos posted by right-wing commentator Steven Crowder in which he repeatedly insulted Vox video host Carlos Maza. YouTube’s decision backfired, resulting in strong public outrage.
Initially, the policy applied only to videos in which one creator targets another. The latest update makes four major changes to the policy.
First, the policy now covers a wider range of threats and personal attacks. Earlier, it banned explicit threats like “I’m going to kill you.” Now, the policy states that it prohibits “not only explicit threats but also veiled or implied threats. This includes content simulating violence toward an individual or language suggesting physical violence may occur.”
Second, the update targets sustained harassment campaigns. In its policy, the company says several creators have described patterns of repeated abusive behavior across multiple videos or comments. Under the latest update, YouTube will take a more holistic view of what a creator is saying. Even if no individual video crosses the company’s policy line, creators whose videos collectively contribute to the persecution of another person can be suspended from the YouTube Partner Program (YPP). The company “may also remove content from channels if they repeatedly harass someone,” the policy reads.
Third, the update bans “content that maliciously insults someone based on protected attributes such as their race, gender expression, or sexual orientation. This applies to everyone, from private individuals, to YouTube creators, to public officials.”
Fourth and finally, the company is expanding a program that uses machine learning to flag potentially offensive comments, routing them into a holding queue where creators can choose whether they appear under their videos. Most creators already use the feature, which has been turned on by default since earlier this year.
(With input from agencies)