YouTube announced plans on Wednesday to remove thousands of videos and channels that advocate for neo-Nazism, white supremacy and other bigoted ideologies in an attempt to clean up extremism and hate speech on its popular service.
The new policy will ban “videos alleging that a group is superior in order to justify discrimination, segregation or exclusion,” the company said in a blog post. The prohibition will also cover videos denying that violent incidents, like the mass shooting at Sandy Hook Elementary School in Connecticut, took place.
YouTube did not name any specific channels or videos that would be banned.
“It’s our responsibility to protect that, and prevent our platform from being used to incite hatred, harassment, discrimination and violence,” the company said in the blog post.
The decision by YouTube, which is owned by Google, is the latest action by a Silicon Valley company to stem the spread of hate speech and disinformation on its site. A month ago, Facebook evicted seven of its most controversial users, including Alex Jones, the conspiracy theorist and founder of InfoWars. Twitter banned Mr. Jones last year.
The companies have come under intense criticism for their delayed reaction to the spread of hateful and false content. At the same time, President Trump and others argue that the giant tech platforms censor right-wing opinions, and the new policies put in place by the companies have inflamed those debates.
The tension was evident on Tuesday, when YouTube said that a prominent right-wing creator who used racial language and homophobic slurs to harass a journalist in videos on YouTube did not violate its policies. The decision set off a firestorm online, including accusations that YouTube was giving a free pass to some of its popular creators.
In the videos, that creator, Steven Crowder, a conservative commentator with nearly four million YouTube subscribers, repeatedly insulted Carlos Maza, a journalist from Vox. Mr. Crowder used slurs about Mr. Maza’s Cuban-American ethnicity and sexual orientation. Mr. Crowder said that his comments were harmless, and YouTube determined they did not break its rules.
“Opinions can be deeply offensive, but if they don’t violate our policies, they’ll remain on our site,” YouTube said in a statement about its decision on Mr. Crowder.
The back-to-back decisions illustrated a central theme that has defined the moderation struggles of social media companies: Making rules is often easier than enforcing them.
“This is an important and long-overdue change,” Becca Lewis, a research affiliate at the nonprofit organization Data & Society, said about the new policy. “However, YouTube has often executed its community guidelines unevenly, so it remains to be seen how effective these updates will be.”
YouTube’s scale — more than 500 hours of new videos are uploaded every minute — has made it difficult for the company to track rule violations. And the company’s historically lax approach to moderating extreme videos has led to a drumbeat of scandals, including accusations that the site has promoted disturbing videos to children and allowed extremist groups to organize on its platform. YouTube’s automated advertising system has paired offensive videos with ads from major corporations, prompting several advertisers to abandon the site.
The kind of content that will be prohibited under YouTube’s new hate speech policies includes videos that claim Jews secretly control the world, that say women are intellectually inferior to men and therefore should be denied certain rights, or that suggest the white race is superior to another race, a YouTube spokesman said.
Channels that post some hateful content, but that do not violate YouTube’s rules with the majority of their videos, may receive strikes under YouTube’s three-strike enforcement system, but would not be immediately banned.
The company also said that channels that “repeatedly brush up against our hate speech policies,” but don’t violate them outright, would be removed from YouTube’s advertising program, which allows channel owners to share in the advertising revenue their videos generate.
In addition to tightening its hate speech rules, YouTube announced it would also tweak its recommendation algorithm, the automated software that shows users videos based on their interests and past viewing habits. This algorithm is responsible for more than 70 percent of overall time spent on YouTube, and has been a major engine for the platform’s growth. But it has also drawn accusations of leading users down rabbit holes filled with extreme and divisive content, in an attempt to keep them watching and drive up the site’s usage numbers.