As a wave of terrorist attacks sweeps across Europe and America, the American multinational technology company Google says it is stepping up its efforts to identify and remove videos related to terrorism and violent extremist content, particularly on its YouTube platform.
Abubakar Shekau, Boko Haram leader, periodically uses threat-filled videos, many of which end up on YouTube.
But the technology company revealed that it used “video analysis models” to find and assess more than 50 percent of the terrorism-related content that was removed in the past six months.
“While we and others have worked for years to identify and remove content that violates our policies,” Google said, “the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done now.”
First, the company says it’s increasing its use of technology to identify videos that contain extremist messages.
Google says it will devote more engineering resources to apply “our most advanced machine learning research to train new ‘content classifiers’ to help us more quickly identify and remove extremist and terrorism-related content”.
The company acknowledges that technology can’t fully solve the problem, so it is also adding 50 expert NGOs to its YouTube Trusted Flagger programme. Flaggers, the company said, are better at distinguishing violent propaganda from news, and are accurate more than 90 percent of the time. Google says it already works with 63 organisations as part of the programme.
Google said it would also be taking a “tougher stance on videos that do not clearly violate our policies”. It said videos that “contain inflammatory religious or supremacist content” will appear with a warning. People will not be able to make money off them, or to comment on or endorse them.
“That means these videos will have less engagement and be harder to find,” it said. “We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints.”
In a fourth step, Google said it would “expand its role in counter-radicalisation efforts” through what it calls the “Redirect Method”.
“This promising approach harnesses the power of targeted online advertising to reach potential ISIS recruits and redirects them towards anti-terrorist videos that can change their minds about joining,” Google said.
“In previous deployments of this system, potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages.”
The steps were first published in an opinion piece Sunday on the Financial Times website and can now be found on a Google blog.
Google’s steps follow a recent Facebook announcement that the social media giant is using artificial intelligence to combat terrorist content.
Earlier this year, the non-profit Southern Poverty Law Center issued a report critical of organisations like Google and Facebook. The anti-hate group said the companies “have done little to counter the use of their platforms to spread hateful, false ‘information’, from conspiracy theories accusing various minority groups of plotting against America to websites promoting Holocaust denial and false ‘facts’ about Islam, LGBT people, women, Mexicans and others.”