Google is refusing to take responsibility online, but there should be no ungoverned space when battling terrorists.
No matter how hard Google tries, it can’t escape accusations about its reluctance to deal with extremist content online. Whether it is jihadi videos on YouTube, anti-Semitic search suggestions, or other forms of extremist content freely available on one of its platforms, Google has to accept that extremists are exploiting the Internet.
Up until now, Google has been able to bat away criticism with relative ease, and with little to no effective action taken. However, the criticism is growing, and the company must respond.
Too often Google absolves itself of any responsibility. The search giant has always strived to be more than a for-profit entity. Rightly, it prides itself on its ability and its success in shaping modern life. And in doing so, it has become part of the fabric of society.
Google is no longer viewed simply as a search engine market leader, nor does it want to be. With this great status comes a heavy responsibility, one that, thus far, Google has failed to live up to.
We can’t give extremists a free ride online. We wouldn’t allow this offline, so why allow it online? It’s right that the Internet, and by extension Google, is a relatively ungoverned space. Silicon Valley has flourished under the current conditions.
But we can’t allow extremists to exploit the good nature of Silicon Valley.
Research by the Centre on Religion & Geopolitics into the accessibility of Islamist extremist content on Google's search platform last year found that, rather worryingly, content ranging from the overtly violent magazines of ISIS and al-Qaeda to subtler calls for killing apostates and Jews was ranking prominently in Google's search listings. In fact, the research found that almost half a million searches are conducted every single month for phrases that return extremist content, 54,000 of which came from the UK alone.
Similarly, journalist Carole Cadwalladr found that Google's own algorithm was suggesting anti-Semitic search phrases, and that anti-Semitic websites run by far-right extremists were the first results those searches returned.
Google recently gave itself a pat on the back in a blog post about how it had done a great job of cleaning up its ad platform in order to "protect people from misleading, inappropriate, or harmful ads". In fact, the company said it developed an algorithm to systematically purge its platforms of almost 2 billion ads that violated its policies. While this may seem an admission of responsibility, ultimately Google only acted because advertising revenue is the core pillar of its financial model.
There is a question to be asked: if Google can invest the time, money, and resources in tackling spurious ads that undermine its policies, why can't the company act against extremist content?
The question of removing certain content from its listings always makes Google hot under the collar. Of course it doesn’t want to be seen as policing the Internet and what people can access, but in reality the company already does this.
In Germany, legislation requires Google to omit anti-Semitic and Nazi content from its search listings, while in Europe more broadly, the Right to be Forgotten law requires the company to remove historic, inaccurate, and misleading listings about individuals at their request.
A collective, legislative approach to tackling extremist content online would circumvent the age-old debate about net neutrality and compel the major tech players to play ball. The likelihood of upsetting Google might be high, but would it lead to Google withdrawing its business from such markets? Certainly not.
Every so often Google, along with Twitter and Facebook, is hauled before ministers and committees in the US, the UK, Europe, and elsewhere as the powers that be seek answers on what these corporations are doing to tackle extremist content. There are some colourful responses, ranging from empowering civil society to holding "hackathons" and workshops. This all sounds very good, but the same flaw underscores all of these initiatives: the responsibility to act is handed from the tech companies to the public.
There is no doubt that wider society has its role to play, but only Google, with its technological resources, can make a sizeable difference.
Google's algorithm for rooting out bad adverts is billed as having done a job that would have taken 50 years to complete manually. Surely, it could explore a similar approach to extremism of all kinds that finds its way into our search results, whether it be racism, homophobia, anti-Semitism, Islamophobia, or any other form of hate speech. On the flipside, if Google fails to take action, are we looking at 50 years before this problem is solved?
Actions speak louder than words, and in its own words Google understands the problem and remains committed to a “zero-tolerance policy for content that incites violence or hatred”. Yet the actions that Google has thus far demonstrated show a real reluctance to engage with the issue.
Whether it's mobile phones, homes, cars, or anything else that Google turns its attention to, the company strives to fundamentally change our lives for the better. However, being a change maker doesn't negate responsibility. In fact, when you become a change maker, your responsibility grows.
Executives at Google should go back to square one, understand what users are searching for, and understand what users are being presented with. We can’t allow extremists a free ride online. There can be no ungoverned spaces when battling terrorists.
This article was first published in The Independent.