Together, we can get through this . . .

Preview Google’s AI Web Filtering Algorithm – censorship

Google motto 2004: Don’t be evil
Google motto 2010: Spying on you for the CIA isn’t evil
Google motto 2017: Internet censorship is good for you

According to a February 2017 Wired article, Google introduced a new tool named “Perspective,” supposedly to help fight online “trolling,” to “keep the web safe,” and to “alert users to likely TOXIC content.” According to Google’s “Perspective” website, “Toxic” is defined as “a rude, disrespectful, or unreasonable comment that is likely to make you leave a discussion.” You can “try out” Google’s AI “Web Filtering” Algorithm by submitting any phrase to their new AI test page. Try it for yourself!
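For the curious, the scoring engine behind that test page is exposed through Jigsaw’s Comment Analyzer API. Below is a minimal sketch of what a query looks like, assuming you have registered for an API key of your own (the key below is a placeholder); the endpoint and field names reflect the v1alpha1 API as publicly documented at the time and may have changed since.

```python
# Minimal sketch: ask the Perspective (Comment Analyzer) API to score a
# phrase for "toxicity". Assumes the v1alpha1 REST endpoint and a valid
# API key; "YOUR_API_KEY" is a placeholder, not a working credential.
import requests

API_KEY = "YOUR_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity_score(text: str) -> float:
    """Return Perspective's 0-to-1 TOXICITY probability for `text`."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    print(toxicity_score("You make a fair point."))
    print(toxicity_score("You are an idiot."))
```

A score near 1.0 means the model thinks most readers would call the comment “toxic”; what threshold a platform acts on is entirely up to the platform.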

OFFICIAL GOOGLE YOUTUBE STATEMENT ON MACHINE-BASED CENSORSHIP

A little over a month ago, we told you about the four new steps we’re taking to combat terrorist content on YouTube: better detection and faster removal driven by machine learning, more experts to notify us of content that needs review, tougher standards for videos that are controversial but do not violate our policies, and more work in the counter-terrorism space.

We wanted to give you an update on these commitments:

Better detection and faster removal driven by machine learning: We’ve always used a mix of technology and human review to address the ever-changing challenges around controversial content on YouTube. We recently began developing and implementing cutting-edge machine learning technology designed to help us identify and remove violent extremism and terrorism-related content in a scalable way. We have started rolling out these tools and we are already seeing some positive progress:

Speed and efficiency: Our machine learning systems are faster and more effective than ever before. Over 75 percent of the videos we’ve removed for violent extremism over the past month were taken down before receiving a single human flag.

Accuracy: The accuracy of our systems has improved dramatically due to our machine learning technology. While these tools aren’t perfect, and aren’t right for every setting, in many cases our systems have proven more accurate than humans at flagging videos that need to be removed.

Scale: With over 400 hours of content uploaded to YouTube every minute, finding and taking action on violent extremist content poses a significant challenge. But over the past month, our initial use of machine learning has more than doubled both the number of videos we’ve removed for violent extremism, as well as the rate at which we’ve taken this kind of content down.

We are encouraged by these improvements, and will continue to develop our technology in order to make even more progress. We are also hiring more people to help review and enforce our policies, and will continue to invest in technical resources to keep pace with these issues and address them responsibly.

More experts: Of course, our systems are only as good as the data they’re based on. Over the past weeks, we have begun working with more than 15 additional expert NGOs and institutions through our Trusted Flagger program, including the Anti-Defamation League, the No Hate Speech Movement, and the Institute for Strategic Dialogue. These organizations bring expert knowledge of complex issues like hate speech, radicalization, and terrorism that will help us better identify content that is being used to radicalize and recruit extremists. We will also regularly consult these experts as we update our policies to reflect new trends. And we’ll continue to add more organizations to our network of advisors over time.

Tougher standards: We’ll soon be applying tougher treatment to videos that aren’t illegal but have been flagged by users as potential violations of our policies on hate speech and violent extremism. If we find that these videos don’t violate our policies but contain controversial religious or supremacist content, they will be placed in a limited state. The videos will remain on YouTube behind an interstitial, won’t be recommended, won’t be monetized, and won’t have key features including comments, suggested videos, and likes. We’ll begin to roll this new treatment out to videos on desktop versions of YouTube in the coming weeks, and will bring it to mobile experiences soon thereafter. These new approaches entail significant new internal tools and processes, and will take time to fully implement.

Early intervention and expanding counter-extremism work: We’ve started rolling out features from Jigsaw’s Redirect Method to YouTube. When people search for sensitive keywords on YouTube, they will be redirected towards a playlist of curated YouTube videos that directly confront and debunk violent extremist messages. We also continue to amplify YouTube voices speaking out against hate and radicalization through our YouTube Creators for Change program. Just last week, the U.K. chapter of Creators for Change, Internet Citizens, hosted a two-day workshop for 13- to 18-year-olds to help them find a positive sense of belonging online and learn skills on how to participate safely and responsibly on the internet. We also pledged to expand the program’s reach to 20,000 more teens across the U.K.

And over the weekend, we hosted our latest Creators for Change workshop in Bandung, Indonesia, where creators teamed up with Indonesia’s Maarif Institute to teach young people about the importance of diversity, pluralism, and tolerance.

Altogether, we have taken significant steps over the last month in our fight against online terrorism. But this is not the end. We know there is always more work to be done. With the help of new machine learning technology, deep partnerships, ongoing collaborations with other companies through the Global Internet Forum, and our vigilant community, we are confident we can continue to make progress against this ever-changing threat. We look forward to sharing more with you in the months ahead.
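The statement never explains the mechanics, but the pattern it gestures at is the standard one: score each upload with a model, automatically act on the high-confidence hits (the “removed before a single human flag” figure), and queue the gray zone for human review. The sketch below is a generic, hypothetical illustration of that routing logic; the model, thresholds, and queue names are all invented for this example and are not YouTube’s actual system.

```python
# Hypothetical sketch of machine-assisted flagging with human review.
# None of this is YouTube's code; the model, thresholds, and queues are
# invented to illustrate the pipeline the statement describes.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Video:
    video_id: str
    transcript: str

@dataclass
class ReviewQueues:
    auto_removed: List[str] = field(default_factory=list)   # machine acted alone
    human_review: List[str] = field(default_factory=list)   # gray zone, needs a person
    published: List[str] = field(default_factory=list)      # below both thresholds

def route(video: Video,
          score_fn: Callable[[str], float],
          queues: ReviewQueues,
          remove_at: float = 0.95,
          review_at: float = 0.60) -> None:
    """Score a video and route it by confidence threshold."""
    score = score_fn(video.transcript)
    if score >= remove_at:
        queues.auto_removed.append(video.video_id)
    elif score >= review_at:
        queues.human_review.append(video.video_id)
    else:
        queues.published.append(video.video_id)

if __name__ == "__main__":
    queues = ReviewQueues()
    toy_model = lambda text: 0.99 if "recruiting" in text else 0.1  # stand-in scorer
    route(Video("abc123", "extremist recruiting speech"), toy_model, queues)
    route(Video("def456", "a frisbee tutorial"), toy_model, queues)
    print(queues)
```

The policy question hiding in a sketch like this is where the two thresholds sit: lower them, and more content gets removed or quarantined by a machine before any human ever looks at it.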

In other words, you can say what you want, but if we disagree it will never come up in search results, it will not be monetized, and no one can comment. If you post anything at all, make sure it will receive the gold seal of the ADL.

Yeah, this is as bad as everyone was saying. YouTube is for:

1. Funny dog videos.
2. Jewish love stories
3. Car crash and motorcycle stunt videos.
4. Stories of Jewish heroism
5. CNN propaganda.
6. Holocaust videos that say what huge victims the Ashkenazi are
7. Vaccine promotion, and promotion of abortion, antidepressants, the gay lifestyle, transgenderism and, as much as they can get away with it, kiddie porn on private channels.
8. How to make Matzo balls
9. Competition Frisbee throwing
10. How a Jew was victimized last week.

You can forget any real truth being allowed; if it goes against the flow in a way that is outside of the controlled opposition, it is GONE – the ADL and others will see to it. Better be approved by Snopes as well.

What was wrong with YouTube BEFORE they announced they were going to censor? I never had anything conspiracy-related pop up when watching “will it blend” videos. It’s not like they ever allowed a cross-mix of topics. There was no reason at all for them to take action against anything that I could see, other than to SHUT PEOPLE UP.

Anyway, their page on this is HERE, and it seems they are allowing comments, but even if so, they can STICK IT. I wanted to leave a tasty F*** Y** but did not have a throwaway account to use on that.

YouTube recently teamed up with the ultra-liberal Anti-Defamation League (ADL) for its ‘Trusted Flagger’ program to determine what people should and shouldn’t be allowed to watch, as we continue the march towards a nanny state supported by the technocratic left.

Originally organized to combat anti-Semitism, the ADL – whose Soros-linked national director last worked in the Obama administration – now spouts hyperbolic propaganda against conservatives while failing to apply the same nebulous standards to the left. For example, it recently pushed to lump all Trump supporters in with white supremacists while insulating progressives from far-left organizations such as the anti-Semitic black nationalist hate group the New Black Panthers and the increasingly violent Antifa.

Of late, the ADL has published hit pieces on several conservatives, including Mike Cernovich, Jack Posobiec, Gavin McInnes, and Lucian Wintrich – offering little to no evidence of any actual wrongdoing aside from milquetoast ‘thought crimes’ deemed beyond the pale by its mollycoddled staff. Shouldn’t liberal bigots Linda Sarsour, Luke Kuhn, the New Black Panthers, and the horse-stabbers of Antifa receive the same treatment for their actual advocacy of and participation in violence towards those they disagree with?

And now, the liberal propagandist group has been given free rein to censor content on YouTube from figures such as politically incorrect University of Toronto professor Jordan B. Peterson, who found himself locked out of his YouTube account yesterday with no explanation (Peterson has since regained access).

To that end, Far Left Watch is out with another report on more selective Orwellian bias from the ADL…

reprinted with permission

----

On Tuesday, August 1st, the Anti-Defamation League (ADL) issued a press release announcing that they have become a “select contributing member of YouTube’s Trusted Flagger program, created in 2012 to enable organizations to notify the platform of content that violates their community guidelines.” It goes on to say:

“The fight against terrorist use of online resources and cyberhate has become one of the most daunting challenges in modern history,” said Jonathan A. Greenblatt, ADL CEO. “Google has been a leader in this area from the beginning. The reality is extremists and terrorists continue to migrate to and exploit various other social media platforms. We hope that those platforms can learn from and emulate what YouTube is doing to proactively identify and remove extremist content.”

What criteria does the ADL use to define “extremism”? Their Center on Extremism (COE) provides this overview of their efforts but does not discuss the process that they use to distinguish “extremism” or “hate” from protected speech:

ADL’s Center on Extremism is the agency’s research and investigative arm, and a clearinghouse of valuable, up-to-the-minute information about extremism of all types—from white supremacists to Islamic extremists.

This announcement is especially concerning considering their recent profile on the Alt-Lite and Alt-Right that lumped in several popular mainstream conservatives with actual neo-nazis and white supremacists. As a response, many prominent conservative personalities took to Twitter and YouTube to criticize the ADL’s hypocrisy and point out how they broadly apply terms like “extremist”, “racist”, “bigot”, etc. to people who simply advocate for conservative political positions. So far, much of the focus has been on the people and organizations that the ADL has labeled as extremists. I want to instead highlight a few violent far-left organizations that they have not reported on.

Redneck Revolt

Has Redneck Revolt been labeled an extremist organization by the ADL? Has a lengthy profile on their members been circulated through legacy media? Should they worry their YouTube content will be removed as part of YouTube’s Trusted Flagger program? Well, according to the ADL, the answer is no.

Red Guards Austin

Red Guards Austin is an autonomous Marxist-Leninist-Maoist collective based in Austin, TX. Their website contains multiple reports on their confrontational and often armed demonstrations:

“we must seriously take up the task not only of self-defense on the personal and community level, but we must also struggle to unite all genuine antifascists behind the necessity of revolution. Revolution means the long fight for communism and nothing less.”

I’ll ask the same questions. Has this organization been labeled an extremist hate group by the ADL? Has a lengthy profile on their members been circulated through legacy media? Should they worry their YouTube content will be removed as part of YouTube’s Trusted Flagger program? The answer again is a resounding no.

It is unclear what criteria the ADL uses to define “extremism,” but considering that they include the numbers “11”, “12”, “13”, and “14”, as well as “Pepe the frog” memes, in their Online Hate Symbols Database, the bar appears to be incredibly low. So why are the violent far-left organizations outlined in this article not mentioned by the ADL? Perhaps we should ask them. Please share this article via Twitter, Facebook, etc., and tag the ADL. Maybe they can offer some clarification.