Jacinda Ardern ‘opinion’ in NY Times

An opinion piece from Jacinda Ardern has been published in the New York Times. This isn’t available from the official Beehive news release website, so I presume it’s intended as a message to the world rather than to the people of New Zealand.

Her stated aim is not, as some people claim, to shut down free speech or to stop critics from speaking. There is absolutely no evidence, despite what some claim, that Ardern is fronting some sort of UN conspiracy to take over the world and subjugate its population.

She says:

Our aim may not be simple, but it is clearly focused: to end terrorist and violent extremist content online. This can succeed only if we collaborate.

The vast majority of us, nearly all of us, are not terrorists or violent extremists, so hopefully we have little to fear from what she is trying to achieve internationally.

A terrorist attack like the one in Christchurch could happen again unless we change. New Zealand could reform its gun laws, and we did. We can tackle racism and discrimination, which we must. We can review our security and intelligence settings, and we are. But we can’t fix the proliferation of violent content online by ourselves. We need to ensure that an attack like this never happens again in our country or anywhere else.

Of course it is up to us here in New Zealand to engage with the discussions over free speech, hate speech, terrorism, extremism and attempts to promote violence online, to help ensure that social media regulations target the extreme minority and don't affect the rest of us.


Social media needs reform. No one should be able to broadcast mass murder.

By Jacinda Ardern
Ms. Ardern is the prime minister of New Zealand.

At 1:40 p.m. on Friday, March 15, a gunman entered a mosque in the city of Christchurch and shot dead 41 people as they worshiped.

He then drove for six minutes to another mosque where, at 1:52 p.m., he entered and took the lives of another seven worshipers in just three minutes. Three more people died of their injuries after the attack.

For New Zealand this was an unprecedented act of terror. It shattered our small country on what was otherwise an ordinary Friday afternoon. I was on my way to visit a new school, people were preparing for the weekend, and Kiwi Muslims were answering their call to prayer. Fifty men, women and children were killed that day. Thirty-nine others were injured; one died in the hospital weeks later, and some will never recover.

This attack was part of a horrifying new trend that seems to be spreading around the world: It was designed to be broadcast on the internet.

The entire event was live-streamed — for 16 minutes and 55 seconds — by the terrorist on social media. Original footage of the live stream was viewed some 4,000 times before being removed from Facebook. Within the first 24 hours, 1.5 million copies of the video had been taken down from the platform. There was one upload per second to YouTube in the first 24 hours.

The scale of this horrific video’s reach was staggering. Many people report seeing it autoplay on their social media feeds and not realizing what it was — after all, how could something so heinous be so available? I use and manage my social media just like anyone else. I know the reach of this video was vast, because I too inadvertently saw it.

We can quantify the reach of this act of terror online, but we cannot quantify its impact. What we do know is that in the first week and a half after the attack, 8,000 people who saw it called mental health support lines here in New Zealand.

My job in the immediate aftermath was to ensure the safety of all New Zealanders and to provide whatever assistance and comfort I could to those affected. The world grieved with us. The outpouring of sorrow and support from New Zealanders and from around the globe was immense. But we didn’t just want grief; we wanted action.

Our first move was to pass a law banning the military-style semiautomatic guns the terrorist used. That was the tangible weapon.

But the terrorist’s other weapon was live-streaming the attack on social media to spread his hateful vision and inspire fear. He wanted his chilling beliefs and actions to attract attention, and he chose social media as his tool.

We need to address this, too, to ensure that a terrorist attack like this never happens anywhere else. That is why I am leading, with President Emmanuel Macron of France, a gathering in Paris on Wednesday not just for politicians and heads of state but also the leaders of technology companies. We may have our differences, but none of us wants to see digital platforms used for terrorism.

Our aim may not be simple, but it is clearly focused: to end terrorist and violent extremist content online. This can succeed only if we collaborate.

Numerous world leaders have committed to going to Paris, and the tech industry says it is open to working more closely with us on this issue — and I hope they do. This is not about undermining or limiting freedom of speech. It is about these companies and how they operate.

I use Facebook, Instagram and occasionally Twitter. There’s no denying the power they have and the value they can provide. I’ll never forget a few days after the March 15 attack a group of high school students telling me how they had used social media to organize and gather in a public park in Christchurch to support their school friends who had been affected by the massacre.

Social media connects people. And so we must ensure that in our attempts to prevent harm that we do not compromise the integral pillar of society that is freedom of expression.

But that right does not include the freedom to broadcast mass murder.

And so, New Zealand will present a call to action in the name of Christchurch, asking both nations and private corporations to make changes to prevent the posting of terrorist content online, to ensure its efficient and fast removal and to prevent the use of live-streaming as a tool for broadcasting terrorist attacks. We also hope to see more investment in research into technology that can help address these issues.

The Christchurch call to action will build on work already being undertaken around the world by other international organizations. It will be a voluntary framework that commits signatories to counter the drivers of terrorism and put in place specific measures to prevent the uploading of terrorist content.

A terrorist attack like the one in Christchurch could happen again unless we change. New Zealand could reform its gun laws, and we did. We can tackle racism and discrimination, which we must. We can review our security and intelligence settings, and we are. But we can’t fix the proliferation of violent content online by ourselves. We need to ensure that an attack like this never happens again in our country or anywhere else.


We need to think outside the legal square to deal with ‘hate speech’

I think that relying on legislation and the courts to deal with 'hate speech' issues may be largely futile. Laws and courts are poorly suited to dealing with most online 'hate speech'.

We already have laws that deal with abusive speech and incitement – since the Christchurch mosque massacres there have been a number of arrests, with several people remanded in custody. While this has picked up some of the more extreme examples and may have sent a warning message to others, there has been a quick resurgence in derogatory and divisive speech online.

Just waiting for the police and courts to deal with the worst is not likely to be much of a solution.

I don’t think that widening the laws to make less serious ‘hate speech’ illegal and subject to prosecution is a practical approach.

One problem is what speech justifies prosecution. Another is who gets to decide.

And with the speed at which speech circulates online the legal system is generally far too slow to react, and even slower to deal with it.

Confronting and ridiculing have been suggested as ways of dealing with 'hate speech'. To be effective, this has to be fast and fact-based.

Perhaps something like the Press Council or Broadcasting Standards Authority could be set up, but geared for rapid response – combating bad speech with good speech.

This could involve research into the common ways divisive and derogatory speech is replicated (there are common patterns and techniques for some of it).

A website as a source of fact based rebuttals would be useful.

This could be Government-funded but non-political and non-legal, with an ability to refer the worst cases to the police.

I think we have to be thinking outside the legal square in looking at ways to deal with this. Some legislative tweaks may be warranted, but most of the problem speech probably does (and should) fall short of being made illegal.

Well-meaning waffle on free speech v hate speech from two MPs

The opinions from National Kaikoura MP Stuart Smith and Labour list MP Priyanca Radhakrishnan from Stuff:  The delicate balance of free speech v hate speech

Stuart Smith

Finding a means to restrict free speech by legislating against “hate speech” – the modern version of blasphemy – is something that must be carefully considered.

Hate speech and harmful extremism, on any platform, should not be tolerated, and we are right to scrutinise the role of social media in the context of the Christchurch tragedy, and the wider issue of extremism and hate speech.

Our legal system has recourse for those seeking to incite violence.

If we legislated against hate speech, who would decide what constitutes hate speech?

I think that this is one of the primary concerns.

That is why I believe that some of the calls to regulate free speech through legislation may go too far. There is a risk that, as with many difficult and potentially harmful issues that society grapples with, attempts to regulate social media too heavily will drive groups underground to the “dark web”.

Or talk about it in private away from the Internet. That probably happens already.

This issue is not new. Professor Paul Spoonley, Pro Vice-Chancellor at Massey University, was to give a very timely public lecture called The Politics of Hate in the Age of the Internet on March 19, but this was postponed.

He states that research points to a significant spike in online hate speech since 2017.

I agree with his argument that we should not allow it to become normalised and that this is not simply about legislation, but public awareness and discussion.

It is up to all New Zealanders to keep this important conversation going.

Priyanca Radhakrishnan

Access to the internet along with social media platforms has undeniably changed the way we consume information.

While discrimination and hate may have always existed, they now have new, powerful distribution channels.

Social media platforms have served to amplify hate speech across the world. It can be difficult to get rid of harmful messages and hateful comments on social media. Even if the original comment or post is deleted, someone, somewhere, could have already copied and shared it.

It’s time for social media platforms to act to prevent the spread of hate.

Major social media platforms like Facebook, YouTube and Twitter have poor records at preventing 'hate speech'. They are too concerned about making money.

In the wake of March 15, we have the opportunity to confront the racism, xenophobia and hate that is proliferated through these channels. As individuals, we are each responsible for what we put out there on the internet, and as a country, there is an appetite for change.

As individuals we also have a responsibility to challenge harmful speech, both online and in our offline lives.

The Government believes the best and most enduring way to ensure change is to act collaboratively with other governments and with social media companies. The problem is global so the solution needs to be too.

The Prime Minister has committed to New Zealand playing a leading role in this change and Kiwis can expect to see more details of our plans in this space in the coming weeks.

I doubt that relying on international companies and other governments is going to deal with this adequately.

We need to come up with local means of dealing with global problems.

I am disappointed with these contributions to the discussion by Radhakrishnan and Smith. They may help keep the discussion going but they haven’t added much if anything themselves.

“It’s time to have a conversation about our hate speech laws”

Green MP Golriz Ghahraman has been busy on Twitter encouraging "a conversation about our hate speech laws".

She has first hand knowledge of hate speech, having been on the receiving end of awful attacks online.

This has also been promoted by the Green Party.

We now know that hate speech allowed to grow and be amplified online is undermining democracy around the world.

In New Zealand we know that it can be fatal.

The Bill of Rights Act protects free speech, but it’s balance against all our other rights. Our laws already protect individuals against harmful speech. You can’t threaten people. You can’t harm their reputation.

This isn't really accurate. The current laws don't protect us; they give us some means of doing something about being threatened or having our reputations harmed, but those means are usually far too slow and far too expensive.

The police will only act on alleged threats if they think there is a risk of serious harm.

Defamation proceedings are lengthy (Blomfield v Slater has taken six years so far to find that Slater had no defence, and damages are unlikely to be determined for another year or so) and very expensive. Most people can't afford to protect their reputations via our current laws.

If defamation against individuals is already illegal, why should people be allowed to harm minority groups?

Including major minority groups?

What constitutes ‘harm’ is contentious and difficult to define. It can range from perceived hard feelings to escalation to actual physical harm.

Most New Zealanders would be shocked to find that our hate speech laws don't cover religious minorities. They don't cover gender, the Rainbow community, or the disabilities community.

All religions are minorities. There are no single 'communities' of Rainbow people or of people with disabilities.

We need to change that.

We must make New Zealand the kind of place where we all feel truly safe and at home.

We certainly should work to change things for the better when it comes to speech.

But is it possible for everyone to feel 'truly safe' from hurt while also feeling truly safe to openly say what we think?

From follow up tweets:

You definitely shouldn’t be allowed to spread hate against a protected group based on your religion. Having well defined hate speech laws that assert equal protection for everyone’s rights and safety would do just that.

Deciding on “well defined hate speech laws that assert equal protection for everyone’s rights and safety” will be very challenging. And equal protection means there should not be specified ‘protected groups’.

It’s frightening that LGBTQIA communities aren’t protected against hate speech in NZ given the very real violence that translates to. Why is it unlawful to speak harmful mistruths about an individual and not a group?! Definitely time to realise we’re behind on this one.

Violent and intolerant language can contribute to actual physical violence – but a lot of harm can be done just with words.

A lot more tolerance of minority races, ethnicities, nationalities, political preferences, religions, gender and sexual preferences would be a major step forward.

But alongside this there must be some tolerance of speech that some people may feel uncomfortable with or offended by – it is common to hear people saying they hate opinions that differ from their own.

We need to have more than just conversations about how we address harmful speech; we need a robust debate about the balance between restricting potentially harmful speech and preserving the freedom to speak in a normal and socially acceptable way.

Justice Minister says hate speech laws ‘very narrow’ with gaps

Minister of Justice Andrew Little has said that New Zealand's hate speech laws are too narrow and that there are gaps in the law, but also that any changes need to be robustly debated.

RNZ:  Current hate speech law ‘very narrow’ – Justice Minister Andrew Little

Justice Minister Andrew Little says gaps exist in current laws around hate speech and what should be considered an offence.

Mr Little announced on Saturday that he was fast-tracking the review, which could see hate crimes made a new legal offence.

Mr Little told Morning Report today the current law specific to hate speech offences was “very narrow”.

“It applies to inciting racial disharmony, it doesn’t relate to expressions that incite discrimination on religious grounds or identity or a range of other grounds.”

“If you look at the Harmful Digital Communications Act, which is the other law we have dealing with what we might describe as hate speech, it’s very thorough but the question is whether the processes that are available under that legislation are as accessible and as good as they might be, so there’s grounds to review both those areas,” he said.

On who is covered under current law, Mr Little said: “If your hateful expressions and hateful actions are directed at somebody’s religion, or other prohibited grounds of discrimination other than race then actually it doesn’t cover that, there’s no offence at that point.”

He said you could potentially lay a complaint for mediation with the Human Rights Commission, but that the most gross type of expression seen around the Christchurch terror attacks wouldn’t be covered by it and that looked like there was a gap in the law.

He said the review would make clear whether the law does fit. He’s not convinced it does, but said he’ll leave it up to the experts doing the review.

Mr Little said the issue about where the line was drawn was the most difficult part of any law that constrains expression and speech.

“The reality is we know that there are forms of expression on social media and elsewhere that you can see at face value are totally unacceptable and not worthy of defence but then there are opinions and views that we might disagree with or might even find offensive but are legitimate contributions to debate.”

Mr Little said any change to the law would need to be robustly debated.

I’m sure any suggested changes will be robustly debated.

Gordon Campbell (Werewolf) on the legal crackdown on hate crimes

Obviously, deterring hate speech and outlawing hate crime has the aim of providing better protections to vulnerable persons and communities, but without unduly restricting the public’s rights to free expression. It isn’t an easy balance to strike.

Hate crimes have a broader effect than most other kinds of violent crime. A hate crime victimizes not only the immediate target but also impacts every member of the group that the direct victim represents. Hate crimes affect families, communities, and sometimes the entire nation.

With hate speech, it is maybe worth keeping in mind that this is not purely a hate crime vs free speech issue. Speech has never been entirely free under the law. Some language (obscenity), some speech in some contexts (eg yelling "fire" in a crowded theatre) and some types of threat have always been illegal.

Theoretically, the online expression of hate speech should fall under the Harmful Digital Communications Act, but given (a) the superheated and extravagant nature of much “normal” online debate and (b) the extent to which hate content online originates from offshore, the New Zealand law doesn’t currently offer much in the way of a defensive shield.

Moreover, regulating speech online to the point where hate speech and/or the perception of it was entirely eliminated would require a surveillance apparatus and enforcement powers like those more commonly found in totalitarian states than in social democracies. Online, the cure may be almost as mad as the disease.

The cure could easily become worse than the disease if restrictions on speech are allowed to go too far.

To me, hate is a very strong term, but many people say they 'hate' many trivial things.

With hate crime, and hate speech then, there may well be some scope for adjusting the boundaries of what counts as “intimidation” – where co-ercion is involved or implied – and “menacing”, where the intention is to engender fear and subservience in the victim. Unfortunately though, when Parliament has tried to deal with this sort of thing in the recent past, ordinary civil liberties have gone out the window in favour of rank political posturing.

Political posturing is a problem in any serious debate.

As Andrew Little has said, we have until December to find viable ways to criminalise expressions that (currently) do not meet the traditional tests of criminality – but which nevertheless have left vulnerable communities or persons feeling less safe. (Arguably, the repeated expression of hostile sentiments can serve to make an actual attack more likely.)

Any pre-emptive law however, which tries to restrict expression in areas where strong social disagreement exists will still need to be even-handed.

Putting that in the context of recent discussions, restrictions on derogatory expression related to religion would have to be 'even-handed', applying equally to 'hate speech' against Muslims and Islam, Christians and Christianity, and also agnostics and atheists.

This requirement may not suit groups that feel they have historical grievances, or socio-economic inequality etc on their side.

As the late US justice Antonin Scalia once famously wrote, the state has no authority to license one side of a debate to fight freestyle, while requiring the other to follow Marquis of Queensbury Rules. That’s one of the ironies.

The pressure for change may have to do with expressions of hostile content, but the solutions – if they are to be enforceable – will probably need to be formulated in ways that are content neutral. There will be few easy political points to be scored from such formulations.

The free speech versus hate speech debate is more than political – it is about the fundamentals of democracy as well as the fundamentals of a (relatively) free and open society.