New Zealand’s prime minister, Jacinda Ardern, landed in Paris this week with a simple message: the terror attack on Muslim worshippers in the city of Christchurch must be a catalyst for big tech companies to change. The attack in March, the worst in New Zealand’s history, was live-streamed on Facebook and images and videos of it were widely shared in the hours and days afterwards. Ardern says she saw part of the attack herself when a video of it began playing automatically on her Facebook feed.
In Paris, as co-hosts of a summit on curbing online extremism, Ardern and the president of France, Emmanuel Macron, launched the “Christchurch Call,” an initiative that aims to make social media giants recognize the real-world harm caused by their virtual platforms. Participants are asked to sign a pledge to crack down more forcefully on the sharing of terrorist propaganda and to disclose more about how their algorithms identify and distribute such material.
But the pledge is just the beginning. The sheer reach of companies like Facebook, Twitter and Google has brought them to the attention of – and into conflict with – governments around the world, from New Delhi to Moscow to Washington. More than one country has put tech companies on notice that change is coming. But what form it will take is still far from clear.
So far, the proposals fall into three categories: technological solutions, policy changes and public pressure. Tech companies have preferred the first because it allows them, not politicians, to control the process. The Christchurch Call starts with public pressure on media giants to sign up to and abide by a public pledge, hoping that such an open commitment combined with more transparency from the tech giants will bring about the necessary change. However, the pledge is non-binding and, while he has given it his full backing, Macron favors much deeper policy change.
Last year, France passed a law that allows judges to force the removal of fake news from tech platforms, though it only applies in the three months before an election. President Macron wants to go further, regulating big tech companies in the same way as banks, with regular audits. The UK is currently consulting on new online safety laws that could, in extreme cases, see social media companies hit with large fines or blocked entirely if they don’t take sufficient steps to tackle harmful content. Even in the US, the debate is shifting: only last week a co-founder of Facebook explicitly called for the company to be broken up, as other monopolies in the US have been in the past.
Taken together, these developments suggest that even in liberal democracies there is exasperation over the lack of regulation of tech companies and a strong desire to force them to make changes – and not only because of the way they are used to disseminate terrorist propaganda.
Hate speech has a disproportionate impact on political discourse in democracies and can sway voters. Tech companies have managed to avoid taxation by moving between jurisdictions and they hoover up unimaginable amounts of personal data from citizens of other countries, which ultimately ends up on servers within the United States. These are all strong reasons for governments to target tech companies with legislation.
Yet any change will be hard to implement because the policies will be hard to formulate. Politicians have been slow to define what hate speech or violent extremist content actually is. Even the Christchurch Call pledge doesn’t try, leaving it up to companies to decide what is objectionable material. In places like the US, the line between free speech and hate speech is often a highly politicized judgement. Nor will it be easy to remove terrorists from tech platforms. It may be possible to remove graphic material, but differentiating between the militants of Hayat Tahrir Al Sham in Syria planning attacks via WhatsApp and business executives plotting their next deal in Istanbul is not straightforward. Crafting policy that sets appropriate limits while also maintaining sufficient freedom for users and for tech platforms to innovate will take time, energy and expertise.
There is a fundamental tension here: governments will only push tech companies to change at a time when public pressure is strong, such as after a violent attack, and that, in turn, is when tech companies will most strongly resist any sudden, sweeping changes in the law. It may also be the moment least conducive to nuanced public discussion.
It is more likely, however, that tech companies will have to relent, simply because they are not especially adept at navigating a highly politicized environment. Several major tech companies have already signed up to the Christchurch Call, and Facebook’s Mark Zuckerberg flew to Paris last week to discuss plans with Macron (although he did not attend the Christchurch Call summit). Under attack in both the US and Europe, Zuckerberg knows his platform is the one most in danger.
The Christchurch attack demonstrated once again how hate, bred and fed online, can suddenly explode into the real world, complete with a live broadcast that then lives on forever in the dark corners of the internet. Tech companies are being forced to confront the genuine harm that spills over from their virtual world into the real world. The Christchurch Call is part of a broader push from governments, a warning that if tech companies don’t change themselves, others will do it for them – and perhaps very soon.
Faisal Al Yafai is currently writing a book on the Middle East and is a frequent commentator on international TV news networks. He has worked for news outlets such as The Guardian and the BBC, and reported on the Middle East, Eastern Europe, Asia and Africa.