Escalating ‘hate speech’ and fake speech concerns

Claimed hate speech continues to feature in political news in New Zealand, while in the US the growing threat of fake speech – or, more accurately, falsifying the appearance of speech – raises concerns about what can be believed from video.

It was always contentious trying to define ‘hate speech’ and differentiate it from acceptable speech that someone might hate to hear. It becomes a real problem when people claim that criticism is some form of hate speech, to distract from the criticism and turn it into a counter-attack.

Stuff: ‘Hate speech’ politics row in Rotorua referred to police

A Mayoral candidate has been accused of age, gender and race-based “hate speech”, prompting a confidential council committee to recommend police involvement.

The political race hate stoush is brewing in Rotorua, with councillor Tania Tapsell branding online comments made by Mayoral candidate Reynold​ Macpherson as “totally unacceptable”.

The row centres on an online post made by Macpherson on the Facebook page of the Rotorua District Residents & Ratepayers’ (RDRR) lobby group on May 14.

The post, a response to a video in which Tapsell encourages more young people to stand for council, is entitled “Beware the charismatic pitch of the Pied Piper”.

“He has referenced me as a Pied Piper who lures away vermin and children and this level of hate speech is totally unacceptable,” Tapsell said.

She said the decision to refer the post to police was made by a confidential council committee, and while not at her request, she believes that “given the risk of harm to myself and others that was the right decision”.

“Charismatic pitch of the Pied Piper” doesn’t sound anywhere close to hate speech to me. It could even be seen as a compliment – ‘charismatic’ is a positive term.

I’m surprised Tapsell has labelled it as hate speech.

I’m astounded ‘a confidential council committee’ has referred it to the police, unless there is more to this that I am not seeing.

Tapsell, of Te Arawa and Tainui ancestry, also cited “verbal and physical threats” from members of the RDRR, and their opposition to Māori participation in council decision making through the now established Te Tatau o Te Arawa board.

“This post was just one example of his many age, gender and race-based attacks on council members,” she said.

“His rants have gone too far so I’m standing up for all the people who have been offended by his hate speech.”

A Rotorua Lakes Council spokesperson confirmed to Stuff that a complaint had been forwarded to police, “and it is now in their hands”.

I don’t see anything age, gender or race-based in the Pied Piper comment. If this comes to anything with the police then our democracy is at risk.

Fake speech

A real concern for the present and future is fake speech – concocted video or audio misrepresenting what was said or how something was said. An example has flared up in the US.

Here yesterday David showed that people have been successfully fooled:

Listen to Pelosi live she literally sounds like she has had a brain injury at worst but certainly well past her use by date. Just as well she is a Democrat so the media dont say anything.

Donald Trump had given authenticity (to those who believe anything he promotes) to a video clip of Nancy Pelosi.

But this is dirty politics. Fox News: Manipulated videos of Nancy Pelosi edited to falsely depict her as drunk spread on social media

Numerous doctored video clips of House Speaker Nancy Pelosi, D-Calif, are spreading on social media, deceptively portraying her as if she were intoxicated.

A three-minute clip of Pelosi speaking at a Center for American Progress event on Wednesday, uploaded to Facebook by a group called “Politics WatchDog”, was viewed over 1.8 million times with nearly 40,000 shares. The video shows her frequently slurring her words and her voice sounding garbled. Copies of the clip had also been found on Twitter and YouTube; the latter has removed it.

According to a report from The Washington Post, experts believed the original video was slowed down to 75 percent of its original speed, and that the pitch was also manipulated, in order to make her appear to be under the influence.

Hany Farid, a computer science and digital forensics expert at the University of California, Berkeley, said there was “no question” the video had been tampered with.

“It is striking that such a simple manipulation can be so effective and believable to some,” he told the Washington Post.

Speaker Pelosi and President Trump exchanged brutal insults on Thursday. Pelosi expressed that she was “concerned” about the president’s “well-being.” Trump shot back, calling her a “mess” and claiming she was “disintegrating.”

Was the video created to attack Pelosi specifically in this stoush?
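To illustrate how simple the manipulation described above is – and why Farid found it “striking” that it could be so believable – slowing a clip to 75 percent speed is a trivial edit with commonly available tools. The following is just an illustrative sketch (it assumes ffmpeg is installed, the filenames are placeholders, and it is not a claim about how the circulating clip was actually made):

import subprocess

# Illustrative only: slow a clip to 75 percent of its original speed,
# the kind of simple edit the Washington Post report describes.
# setpts=PTS/0.75 stretches the video timestamps to 75 percent speed;
# atempo=0.75 slows the audio to match without altering pitch
# (the circulated clip reportedly also had its pitch adjusted).
subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-filter_complex", "[0:v]setpts=PTS/0.75[v];[0:a]atempo=0.75[a]",
    "-map", "[v]", "-map", "[a]",
    "slowed.mp4",
], check=True)

That such a minor tweak can change how a speaker comes across is exactly why this sort of fakery spreads so easily.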

These days a deceit travels around the world very rapidly, and while it can eventually be debunked, the damage can’t easily be undone. Snopes:

On 24 May 2019, a manipulated video that supposedly showed U.S. House Speaker Nancy Pelosi drunkenly slurring her speech was widely shared on social media. One version of the post that the Facebook page “Politics Watchdog” shared was viewed millions of times.

It will still be circulating, and is likely to still be promoted even though it has been proven to be a fake video.

This doctored video was shared by thousands of users on social media, including by Rudy Giuliani, President Donald Trump’s counsel, with the caption: “What is wrong with Nancy Pelosi? Her speech pattern is bizarre.” (Giuliani’s tweet has since been deleted.)

Actually Trump’s tweet referred to a second doctored video.

This second video was not doctored in the same overt manner as the slurred-speech footage. Rather, this clip was created by selectively picking a few brief moments in which Pelosi paused or stumbled (totaling about 30 seconds) while answering questions from reporters, and then cobbled them together to give the impression that Pelosi was “stammering” through her news conference.

And Trump’s tweet has not been deleted.

Misrepresenting opponents via doctored video is not new; it has been happening for a long time. But improved video and animation technology, and the speed with which fake speech can be circulated, raise the risks of this type of dirty politics being used more.

So we have contrasting situations where some relatively benign speech is claimed to be far worse than it is. The ‘hate speech’ label has become a form of counter-attack, and is itself a threat to free speech when it is used to try to discredit or deter free and open communication.

On the other hand, there are real risks from doctored speech being used more as it becomes increasingly difficult to differentiate it from authentic speech.

That this sort of thing has been done before is no excuse for it being done in an increasingly sophisticated and rapid manner.

Social media switches attacks to partner of MP, Kiwiblog prominent

Yesterday the social media bash wagon continued attacking Green MP Golriz Ghahraman, but also widened attacks to her partner Guy Williams, by dredging up historic tweets.

David Farrar chose to feed red meat to his baying crowd at Kiwiblog, further inflaming a nasty campaign against Ghahraman.

Particularly this one.

Williams is a comedian, but that was a crap joke about Don Brash. Fair enough to criticise it.

But to bring it up nearly two years later to add to the Ghahraman pile-on is also crappy.

Ghahraman’s past also keeps being dredged up and misrepresented (more than she misrepresented it herself) – for example, I have seen a cropped photo of her and a criminal she was involved in defending as a lawyer.

David Farrar chose to include the two-year-old tweet in his post ‘David Seymour on free speech’ – he claimed it was “this tweet this morning” even though it is clearly dated September 11, 2017, which was before Ghahraman became an MP.

Seymour used strong language about a political opponent (and they are not words I would use) but compare that to this tweet this morning:

Joking about running someone over because you don’t like their politics.

Now don’t get me wrong. I don’t have a problem with Williams’ tweet by itself. But I ask people to imagine this.

Think if the partner of a National MP tweeted about whether they should run over a Green MP. The media would be denouncing it as hate speech and inciting violence.

Ghahraman does have legitimate security concerns, based on the vile messages about lynching her on a private Facebook group. The people responsible should be held accountable.

I think it was particularly poor of Farrar to include this tweet in an op-ed by David Seymour that he posted. He would have known this would feed Kiwiblog commenters who are already, at times, raging over his revised site rules.

Comments on the thread include:

Brian Marshall:

She is a menace to freedom. Huge threat.
If anyone can’t see what David Seymour is referring to, then I suggest they don’t belong in a New Zealand Parliament.
The most disgusting thing is that David Seymour is described as some sort of Nazi, but those proposing Hate Speech laws are acting like Fascists of which Nazi’s are branch.

hullkiwi:

I am in total agreement with you Brian. Her utterances on this topic and other matters are an affront to democracy and with it, she is a menace to democracy.

David Garrett:

Yeah but did she actually get death threats?? Please refer to my comment above… In short, if the polis think you have been credibly threatened they are in there for you…some little snowflake who thinks she’s been threatened: Not so much…

alien:

It is interesting that in a week that a report on bullying etc in parliament we see some of these people and media bullying the leader of the act party. I’m sure we’ve all heard these green mps say far far worse about national mps and a prime minister.

Given the levels of vitriol directed at Ghahraman on Kiwiblog over the last few days that’s rather ironic – defending Seymour while implying ‘Green MPs’ must be far worse (with no evidence given).

Lipo:

As the discussion on Free Speech is being had, I heard Peter Williams this morning say that he thought Hate Speech should be decided by (and only by) the recipient of the intended words. While this has some merit I think this is wrong.
Hate speech should only be defined as “Hate Speech” by the person speaking the words.
It is always what the words meant to say not on how the recipient received them

That’s a novel approach.

I don’t know if Peter Williams is being quoted correctly, but claims like that are ridiculous, and I see no chance of the scaremongering claims getting anywhere near law.

the deity formerly known as nigel6888:

So a refugee politician who specialises in abusing and baiting anyone who doesnt share her communistic objectives has managed to get a few cretins to abuse her back.

and……….. trumpets……….. she’s the victim!

Utterly remarkable for its predictable banality.

I have seen quite a few cretins claiming to be victims in this debate. Seems to be a common approach these days (prominently used by Donald Trump) – attack, then claim to be the victim.

GPT1:

I do not understand the carry on re. Seymour’s comment. I guess it could be argued that he should have said “her position on this issue is a threat to freedom” but it seemed to be a robust political – rather than personal – rebuttal.

As it happens I agree that Ms Ghahraman’s attempts to regulate free speech have the effect of being an attack on our free society.

‘Attempts to regulate free speech’ have been grossly overstated in this debate. Ghahraman has expressed her opinion, as has Seymour. That is free speech in action.

There is a lot of hypocrisy on this – defending Seymour’s right to criticise as he sees fit, but attacking Ghahraman for doing the same thing, trying to shout her down and shut her down.

Defenders of Ghahraman also come under fire. Wangas Feral:

That Collins and other National women MPs jumped in as White Knights to come to the aid of GG is the most upsetting thing in this whole affair. Making it a gender issue shows that they are no better than the professional victims of the left. Collins has really gone down in my estimation now.

Kiwiblog has always had a smattering of worthwhile comments amongst the noise. Fentex:

Finding someone representative of something relevant is needed to make the point – ideally DPF wants to find a quote by Golriz Ghahraman representing the position he wants highlighted.

And wouldn’t finding quotes from her supporting Seymour’s position she’s uniquely dangerous go some way to that?

This is what she’s quoted saying…

“it is vital that the public is involved in a conversation about what speech meets the threshold for being regulated, and what mix of enforcement tools should be used.”

…and I think she’s been vilified because that statement takes the implicit position there is speech that must be regulated.

While I beleive people do accept incitement to riot or murder is a crime and is properly outlawed and punishable I think some, and clearly Seymour, suspects Goriz means something altogether more oppressive and intrusive which constitutes a “menace to freedom.”

After all what we all broadly accept as improper speech (incitement to commit crimes etc) is already illegal, so therefore any conversation about new restrictions must be about something else – something not yet illegal.

I think I understand his point, and I suspect many objecting to his attitude misunderstand the subject and have interpreted it in a different context (i.e if they already suspected Seymour of racism they may see different implications and meaning in his statement).

If you keep your eye on the subject and don’t let identities distract you there’s a continual ongoing debate about hateful speech and discussion of what might be done to avoid dangers it engenders*, but please don’t go haring off on tangents about different issues – it doesn’t help and only emboldens those who wish to use tactics of distraction and tribalism.

Maggy Wassilieff:

Ghahraman has made her position clear…
she believes our law does not protect groups identified by gender, sexual orientation, religion, or disability.

https://www.stuff.co.nz/national/politics/opinion/112708601/we-need-laws-with-real-teeth-to-protect-our-online-safety

Ghahraman has stated what she believes, and we should be debating things like that. But we are nowhere near any sort of legal clampdown on ‘free speech’ that some are claiming.

Jacinda Ardern ‘opinion’ in NY Times

An opinion piece from Jacinda Ardern has been published in the New York Times. This isn’t available from the official Beehive news release website, so I presume it’s intended as a message to the world rather than to the people of New Zealand.

Her aim (as stated) is not, as some people claim, to shut down free speech or to stop critics from speaking. There is absolutely no evidence, as some claim, that Ardern is fronting some sort of UN conspiracy to take over the world and subjugate the world population.

She says:

Our aim may not be simple, but it is clearly focused: to end terrorist and violent extremist content online. This can succeed only if we collaborate.

The vast majority of us, nearly all of us, are not terrorists or violent extremists, so we hopefully have little to fear from what she is trying to achieve internationally.

A terrorist attack like the one in Christchurch could happen again unless we change. New Zealand could reform its gun laws, and we did. We can tackle racism and discrimination, which we must. We can review our security and intelligence settings, and we are. But we can’t fix the proliferation of violent content online by ourselves. We need to ensure that an attack like this never happens again in our country or anywhere else.

Of course it is up to us here in New Zealand to engage with discussions over free speech and hate speech and terrorism and extremism and attempts to promote violence online, to help ensure that social media regulations are aimed at the extreme minority and don’t affect the rest of us.


Social media needs reform. No one should be able to broadcast mass murder.

By Jacinda Ardern
Ms. Ardern is the prime minister of New Zealand.

At 1:40 p.m. on Friday, March 15, a gunman entered a mosque in the city of Christchurch and shot dead 41 people as they worshiped.

He then drove for six minutes to another mosque where, at 1:52 p.m., he entered and took the lives of another seven worshipers in just three minutes. Three more people died of their injuries after the attack.

For New Zealand this was an unprecedented act of terror. It shattered our small country on what was otherwise an ordinary Friday afternoon. I was on my way to visit a new school, people were preparing for the weekend, and Kiwi Muslims were answering their call to prayer. Fifty men, women and children were killed that day. Thirty-nine others were injured; one died in the hospital weeks later, and some will never recover.

This attack was part of a horrifying new trend that seems to be spreading around the world: It was designed to be broadcast on the internet.

The entire event was live-streamed — for 16 minutes and 55 seconds — by the terrorist on social media. Original footage of the live stream was viewed some 4,000 times before being removed from Facebook. Within the first 24 hours, 1.5 million copies of the video had been taken down from the platform. There was one upload per second to YouTube in the first 24 hours.

The scale of this horrific video’s reach was staggering. Many people report seeing it autoplay on their social media feeds and not realizing what it was — after all, how could something so heinous be so available? I use and manage my social media just like anyone else. I know the reach of this video was vast, because I too inadvertently saw it.

We can quantify the reach of this act of terror online, but we cannot quantify its impact. What we do know is that in the first week and a half after the attack, 8,000 people who saw it called mental health support lines here in New Zealand.

My job in the immediate aftermath was to ensure the safety of all New Zealanders and to provide whatever assistance and comfort I could to those affected. The world grieved with us. The outpouring of sorrow and support from New Zealanders and from around the globe was immense. But we didn’t just want grief; we wanted action.

Our first move was to pass a law banning the military-style semiautomatic guns the terrorist used. That was the tangible weapon.

But the terrorist’s other weapon was live-streaming the attack on social media to spread his hateful vision and inspire fear. He wanted his chilling beliefs and actions to attract attention, and he chose social media as his tool.

We need to address this, too, to ensure that a terrorist attack like this never happens anywhere else. That is why I am leading, with President Emmanuel Macron of France, a gathering in Paris on Wednesday not just for politicians and heads of state but also the leaders of technology companies. We may have our differences, but none of us wants to see digital platforms used for terrorism.

Our aim may not be simple, but it is clearly focused: to end terrorist and violent extremist content online. This can succeed only if we collaborate.

Numerous world leaders have committed to going to Paris, and the tech industry says it is open to working more closely with us on this issue — and I hope they do. This is not about undermining or limiting freedom of speech. It is about these companies and how they operate.

I use Facebook, Instagram and occasionally Twitter. There’s no denying the power they have and the value they can provide. I’ll never forget a few days after the March 15 attack a group of high school students telling me how they had used social media to organize and gather in a public park in Christchurch to support their school friends who had been affected by the massacre.

Social media connects people. And so we must ensure that in our attempts to prevent harm that we do not compromise the integral pillar of society that is freedom of expression.

But that right does not include the freedom to broadcast mass murder.

And so, New Zealand will present a call to action in the name of Christchurch, asking both nations and private corporations to make changes to prevent the posting of terrorist content online, to ensure its efficient and fast removal and to prevent the use of live-streaming as a tool for broadcasting terrorist attacks. We also hope to see more investment in research into technology that can help address these issues.

The Christchurch call to action will build on work already being undertaken around the world by other international organizations. It will be a voluntary framework that commits signatories to counter the drivers of terrorism and put in place specific measures to prevent the uploading of terrorist content.

A terrorist attack like the one in Christchurch could happen again unless we change. New Zealand could reform its gun laws, and we did. We can tackle racism and discrimination, which we must. We can review our security and intelligence settings, and we are. But we can’t fix the proliferation of violent content online by ourselves. We need to ensure that an attack like this never happens again in our country or anywhere else.


Andrew Little on the legal balance between freedom of speech and hate speech

Minister of Justice Andrew Little on freedom of speech versus protecting people from hate speech.

The New Zealand Bill of Rights Act affirms our right to freedom of expression, including the right to impart and receive opinions of any kind. Protecting freedom of speech is crucial to our democracy and the ability of all citizens to participate meaningfully.

But in the immediate wake of the March 15 mosque attacks, many citizens from minority ethnic and religious communities told of how opinions and statements they routinely see on social media and other public platforms make them feel threatened, unwelcome and alienated.

A responsible government must consider these claims, and on a principled basis.

Consequently I have asked the Ministry of Justice to work with the Human Rights Commission to examine whether our laws properly balance the issues of freedom of speech and hate speech. The process should not be rushed, and I expect a report for public comment towards the end of the year.

Drawing the line is not simple. Protecting freedom of speech that challenges authority and orthodoxy will inevitably still cause offence to some.

But just being offensive or disagreeable does not necessarily make something harmful. Controversial issues in New Zealand, such as immigration policy or indigenous rights and reconciliation with the Treaty of Waitangi, will continue to be the subject of public debate. And so they should.

Protecting our crucially important right to freedom of speech, while testing whether the balance is right regarding “hate speech”, needs a robust public discussion from all quarters. This way we will ensure that all of our citizens’ rights are protected, and every person can express their humanity without fear.

Clear definitions of ‘hate speech’ and ‘harmful’ will be crucial. In current law there is quite a high bar to prove ‘harmful’.

Note that this details an examination of whether current laws get the balance right or not. There is no certainty that the laws will be changed.

I think that many people have been jumping to conclusions and scaremongering about this.

The best way of dealing with it is to engage in the process, to the extent of contributing to rational discussion on whether our current laws are fit for purpose.

Full op-ed at NZ Herald:


Hate speech threatens our right to freedom of speech

OPINION by Andrew Little

Protecting freedom of speech is vital to hold those in authority to account, challenge the socially and culturally dominant, and enable society to progress.

Freedom of speech can give force to new ideas, but also cause discomfort and offence. It is usually the first right to be lost under oppressive regimes, and among the first to be restored, at least in name, after revolutionary change.

The New Zealand Bill of Rights Act affirms our right to freedom of expression, including the right to impart and receive opinions of any kind. Protecting freedom of speech is crucial to our democracy and the ability of all citizens to participate meaningfully.

But in the immediate wake of the March 15 mosque attacks, many citizens from minority ethnic and religious communities told of how opinions and statements they routinely see on social media and other public platforms make them feel threatened, unwelcome and alienated.

Others have said these types of statements allow a climate to develop that is tolerant of harmful discriminatory expression.

A responsible government must consider these claims, and on a principled basis.

Consequently I have asked the Ministry of Justice to work with the Human Rights Commission to examine whether our laws properly balance the issues of freedom of speech and hate speech. The process should not be rushed, and I expect a report for public comment towards the end of the year.

The context for this stocktake is not just the horrific events in Christchurch, but also the history of free speech protection in New Zealand.

The reality is we already have laws to protect against what we call “hate speech”, which are the Human Rights Act and the Harmful Digital Communications Act. These criminalise incitement of racial disharmony through written or verbal expression, and refusal to remove social media posts which are bullying or include humiliating intimate information about someone.

Is it right that we have sanctions against incitement of disharmony on racial grounds but not, for example, on grounds of religious faith?

And how could there be any limitations on free speech, in light of the Government’s obligation under the Bill of Rights Act to protect it?

Our Bill of Rights draws on worldwide traditions to uphold basic human rights. The law has a close family link to one of the founding documents of the United Nations, the Universal Declaration of Human Rights.

The Declaration upholds freedom of thought and religion and the right to hold opinions “without interference”. But, forged in an international effort determined to eliminate the hatred and discrimination that drove World War II, it also called on us all to act towards one another in a spirit of “brotherhood”, and affirmed the right of every person to be protected against discrimination.

It drew on the revolutionary charters of the Enlightenment, the United States and French constitutions. Both protected free speech, with the Americans emphasising the equality of all people and the French stating liberty is the freedom to do anything which doesn’t harm others.

When speech threatens others, or is abusively discriminatory, then it has the potential to cause harm and encroach on the freedom of others.

As with the heritage that inspired it, our Bill of Rights recognises what it describes as justified limitations. It does so to ensure the exercise of a freedom by one person does not deny freedom to others.

Drawing the line is not simple. Protecting freedom of speech that challenges authority and orthodoxy will inevitably still cause offence to some.

But just being offensive or disagreeable does not necessarily make something harmful. Controversial issues in New Zealand, such as immigration policy or indigenous rights and reconciliation with the Treaty of Waitangi, will continue to be the subject of public debate. And so they should.

Protecting our crucially important right to freedom of speech, while testing whether the balance is right regarding “hate speech”, needs a robust public discussion from all quarters. This way we will ensure that all of our citizens’ rights are protected, and every person can express their humanity without fear.

We need to think outside the legal square to deal with ‘hate speech’

I think that relying on legislation and the courts to deal with ‘hate speech’ issues may be largely futile. Laws and courts are poorly suited to dealing with most online ‘hate speech’.

We already have laws that deal with abusive speech and incitement – since the Christchurch mosque massacres there have been a number of arrests, with several people remanded in custody. While this has picked up some of the more extreme examples and may have sent a warning message to others, there has been a quick resurgence in derogatory and divisive speech online.

Just waiting for the police and courts to deal with the worst is not likely to be much of a solution.

I don’t think that widening the laws to make less serious ‘hate speech’ illegal and subject to prosecution is a practical approach.

One problem is what speech justifies prosecution. Another is who gets to decide.

And with the speed at which speech circulates online the legal system is generally far too slow to react, and even slower to deal with it.

Confronting and ridiculing have been suggested as ways of dealing with ‘hate speech’. To be effective this has to be fast and fact-based.

Perhaps something like the Press Council or Broadcasting Standards Authority could be set up, but geared for rapid response – combating bad speech with good speech.

This could involve research to identify common ways in which divisive and derogatory speech is replicated (there are common patterns and techniques for some of it), so that rebuttals can be prepared.

A website as a source of fact based rebuttals would be useful.

This could be Government-funded but non-political and non-legal, with an ability to refer the worst cases to the police.

I think we have to be thinking outside the legal square in looking at ways to deal with this. Some legislative tweaks may be warranted, but most of the problem speech probably falls (and should fall) short of being illegal.

Well-meaning waffle on free speech v hate speech from two MPs

The opinions from National Kaikoura MP Stuart Smith and Labour list MP Priyanca Radhakrishnan from Stuff:  The delicate balance of free speech v hate speech

Stuart Smith

Finding a means to restrict free speech by legislating against “hate speech” – the modern version of blasphemy – is something that must be carefully considered.

Hate speech and harmful extremism, on any platform, should not be tolerated, and we are right to scrutinise the role of social media in the context of the Christchurch tragedy, and the wider issue of extremism and hate speech.

Our legal system has recourse for those seeking to incite violence.

If we legislated against hate speech, who would decide what constitutes hate speech?

I think that this is one of the primary concerns.

That is why I believe that some of the calls to regulate free speech through legislation may go too far. There is a risk that, as with many difficult and potentially harmful issues that society grapples with, attempts to regulate social media too heavily will drive groups underground to the “dark web”.

Or talk about it in private away from the Internet. That probably happens already.

This issue is not new. Professor Paul Spoonley, Pro Vice-Chancellor at Massey University, was to give a very timely public lecture called The Politics of Hate in the Age of the Internet on March 19, but this was postponed.

He states that research points to a significant spike in online hate speech since 2017.

I agree with his argument that we should not allow it to become normalised and that this is not simply about legislation, but public awareness and discussion.

It is up to all New Zealanders to keep this important conversation going.

Priyanca Radhakrishnan

Access to the internet along with social media platforms has undeniably changed the way we consume information.

While discrimination and hate may have always existed, they now have new, powerful distribution channels.

Social media platforms have served to amplify hate speech across the world. It can be difficult to get rid of harmful messages and hateful comments on social media. Even if the original comment or post is deleted, someone, somewhere, could have already copied and shared it.

It’s time for social media platforms to act to prevent the spread of hate.

Major social media platforms like Facebook, YouTube and Twitter have poor records of preventing ‘hate speech’. They are too concerned about making money.

In the wake of March 15, we have the opportunity to confront the racism, xenophobia and hate that is proliferated through these channels. As individuals, we are each responsible for what we put out there on the internet, and as a country, there is an appetite for change.

As individuals we also have a responsibility to challenge harmful crap speech, both online and in our offline lives.

The Government believes the best and most enduring way to ensure change is to act collaboratively with other governments and with social media companies. The problem is global so the solution needs to be too.

The Prime Minister has committed to New Zealand playing a leading role in this change and Kiwis can expect to see more details of our plans in this space in the coming weeks.

I doubt that relying on international companies and other governments is going to deal with this adequately.

We need to come up with local means of dealing with global problems.

I am disappointed with these contributions to the discussion by Radhakrishnan and Smith. They may help keep the discussion going but they haven’t added much if anything themselves.

“It’s time to have a conversation about our hate speech laws”

Green MP Golriz Ghahraman has been busy on Twitter encouraging “a conversation about our hate speech laws”.

She has first hand knowledge of hate speech, having been on the receiving end of awful attacks online.

This has also been promoted by the Green Party.

We now know that hate speech allowed to grow and be amplified online is undermining democracy around the world.

In New Zealand we know that it can be fatal.

The Bill of Rights Act protects free speech, but it’s balance against all our other rights. Our laws already protect individuals against harmful speech. You can’t threaten people. You can’t harm their reputation.

This isn’t really very accurate. The current laws don’t protect us; they give us some means of doing something about being threatened or having our reputations harmed, but those means are usually far too slow and far too expensive.

The police will only act on alleged threats if they think there is a risk of serious harm.

Defamation proceedings are lengthy (Blomfield v Slater has taken six years so far to find that Slater had no defence, but damages are unlikely to be determined for another year or so) and very expensive. Most people can’t afford to protect their reputations via our current laws.

If defamation against individuals is already illegal, why should people be allowed to harm minority groups.

Including major minority groups?

What constitutes ‘harm’ is contentious and difficult to define. It can range from perceived hurt feelings through to actual physical harm.

Most new Zealanders would be shocked to find that our hate speech laws don’t cover religious minorities. They don’t cover gender, the Rainbow community, or the disabilities community.

All religions in New Zealand are minorities. There are no single ‘communities’ of Rainbow people or of people with disabilities.

We need to change that.

We must make New Zealand the kind of place where we all feel truly safe and at home.

We certainly should work to change things for the better when it comes to speech.

But is it possible for everyone to feel ‘truly safe’ from hurt, while also feeling truly safe to say openly what they think?

From follow up tweets:

You definitely shouldn’t be allowed to spread hate against a protected group based on your religion. Having well defined hate speech laws that assert equal protection for everyone’s rights and safety would do just that.

Deciding on “well defined hate speech laws that assert equal protection for everyone’s rights and safety” will be very challenging. And equal protection means there should not be specified ‘protected groups’.

It’s frightening that LGBTQIA communities aren’t protected against hate speech in NZ given the very real violence that translates to. Why is it unlawful to speak harmful mistruths about an individual and not a group?! Definitely time to realise we’re behind on this one.

Violent and intolerant language can contribute to actual physical violence – but a lot of harm can be done just with words.

A lot more tolerance of minority races, ethnicities, nationalities, political preferences, religions, gender and sexual preferences would be a major step forward.

But alongside this there must be some tolerance of speech that some people may feel uncomfortable with or offended by – it is common to hear people saying they hate opinions that differ from their own.

We need to have more than just conversations about how we address harmful speech, we need to have a robust debate about the balance between potentially harmful speech, and the freedom to speak in a normal and socially acceptable way.

ISP web blocks and online censorship debate

The Christchurch mosque attacks prompted unprecedented action from New Zealand Internet Service Providers, who tried to block access to the video of the attack.  This has just been extended.

We are heading into some important debate about censorship and free speech.

Newsroom:  ISP keeps Chch web blocks after Govt intervention

New Zealand’s largest internet provider has reversed plans to stop blocking websites which hosted videos of the Christchurch terror attack, after a last-minute intervention by the Government.

In the wake of the mosque shootings, a number of New Zealand’s biggest ISPs took what they themselves acknowledged was an “unprecedented step” – blocking websites which were hosting a video of the attack live-streamed by the alleged murderer, as well as his manifesto.

In an open letter explaining the move and calling for action from larger tech companies, the chief executives of Spark, Vodafone and 2degrees said the decision was the right one in “such extreme and tragic circumstances”.

On Tuesday evening, both Spark and Vodafone told Newsroom they would start to remove the remaining website blocks overnight.

“We believe we have now reached the point where we need to cease our extreme temporary measures to block these websites and revert to usual operating procedures,” a Spark spokeswoman said.

However, less than two hours after its initial response, Spark said the websites would continue to be blocked for several more days “following specific requests from Government”.

Newsroom understands the U-turn came after Government officials held discussions with the company, asking it to keep the blocks in place until after the official memorial service for the victims of the attack took place on Friday.

No indication of how much persuasion was required to prompt a rethink.

The ISPs’ original actions have raised issues of censorship, with the companies acknowledging that in some circumstances access to legitimate content may have been prevented.

Netsafe chief executive Martin Cocker said website blocking had been “a really useful short-term tool” to stop the spread of the content.

“They’ve [the ISPs] been really clear with everybody that they took on the filtering responsibility because they wanted to play their part in reducing the obvious harm occurring in the aftermath of the attacks, and they did that.”

But this leads to an important discussion on censorship. There is already online material that is ‘censored’, as it should be (child porn, snuff movies, terrorism-related material), but there will always be pushes for both more limits and fewer limits.

Thomas Beagle, chairman of the NZ Council for Civil Liberties, said he had sympathy for the approach taken by ISPs following the “ghastly” attack, but the public needed to ask questions about whether similar blocking would occur in future.

“That was an exceptional situation and people took exceptional action – of course, the worry is now that it’s been done once, are people then going to start thinking, we can do it for other things as well?”

While there was an argument that the companies were simply exercising their contractual rights, Beagle said their near-monopoly in the telecommunications market meant there was a significant censorship issue.

“Civil liberties are traditionally concerned with government interference, but I think that when you’re talking about the dominant players who have 99 percent of the mobile market or more…that’s also an effective form of censorship as well.”

However, more traditional censorship by the Government could “extend and grow in an undesirable manner”, and would require a significant public conversation, he said.

There needs to be a lot of meaningful public discussion on the degree of censorship – as there has been over the Chief Censor recently ruling the terrorist’s manifesto objectionable and therefore illegal to possess or distribute in New Zealand (its easy availability internationally renders this a weak means of protection).

Censorship debate begins

What is clear is that the debate about how to censor offensive material online is just beginning.

There has long been debate over censorship, but major events and actions in response will always draw more prominence to the arguments for and against.

Cocker said he supported the development of a formal, government-led process for blocking objectionable content when necessary, which would allow greater specificity in how content was blocked and set up oversight measures to avoid abuse.

“Those are the kind of things that come back to a government agency being empowered to take that responsibility, then all the telcos have got to do is just add the URL to the list and block it.”

However, Beagle said there was a question of whether ad-hoc arrangements would be preferable to a formalised process, given the rarity of an event like the Christchurch attack.

“Is it better to say hey, this is so out of the realm of normal day-to-day business we shouldn’t actually try and cater for it?

“I think it’s safe to say that we shouldn’t be rejigging our entire security infrastructure, internet filtering and censorship based on a one-off event which is utterly exceptional in New Zealand history.”

That’s an important point. A repeat of what happened in Christchurch seems very unlikely. Security measures should be reconsidered to look at how to minimise the risks, but public freedoms and free speech should not be over-restricted due to an abnormal one-off situation.

Stuff imposes extensive commenting restrictions

Yesterday Stuff announced new terms and conditions for commenting on their website, which put a lot of restrictions on the types of comments and topics that can be commented on. This is a flow-on effect of the Christchurch mosque attacks.

Immediately after the attacks David Farrar caused a lot of angst at Kiwiblog by imposing significant commenting restrictions, with anyone not identifying by their real name being put on auto-moderation (each of their comments needs to be approved by a moderator). There is still a lot of grizzling about it. From the last General Debate (Monday – for some reason there wasn’t one yesterday):

DigNap15:

DF needs to change the name of this blog to
The sickly white liberal apologists blog.

Classical Liberal:

The moderation system is completely unfair to long term, reasonable KB supporters. I have always defended equal rights before the law for men, women, homosexuals, all ethnic groups.

But several of my perfectly reasonable comments are sitting here for hours.

I hope it’s just because it’s a slow Monday, not because the moderators have become immoderate!

Stuff updated yesterday – Terms and Conditions: User submitted content and comments

We (Stuff Limited) invite our readers (you) to post comments and profile information in a number of areas of the website.

The views expressed in the comments areas are not our views or opinions, nor the views or opinions of any of our staff or our related entities. We accept no liability in respect of any material posted in the comments areas, nor are we responsible for the content and accuracy of that material.

If you place reliance on material posted on this website you do so at your own risk, and you indemnify us (and our related entities) from any liabilities, claims, costs, loss (including consequential loss) or damage suffered or caused by reason of your reliance on any material posted in the comments areas.

Comment policy

Stuff welcomes comments from readers on our website.

We invite you to discuss issues and share your views. We encourage robust debate and criticism provided it is civil. But our comment section is a moderated online discussion, not a public forum.

We reserve the right to reject comments, images or links that:

  • are offensive or obscene;
  • contain objectionable or profane language – including use of symbols (we maintain a list of banned obscenities and comments featuring those words will be automatically rejected);
  • include personal attacks of any kind (including name-calling; insults; mocking the subjects of stories or other readers; or abusing Stuff journalists or contributors);
  • are discriminatory or express prejudice on the basis of race, ethnicity, country of origin, gender, sexuality, religion, or disability;
  • contain spam or include links to other sites;
  • are clearly off topic;
  • are deliberate lies or attempts to mislead. While we cannot review all comments for accuracy, we reserve the right to reject comments we consider, on the balance of probabilities, to be deliberate falsehoods;
  • impersonate an individual or organisation, are fraudulent, defamatory of any person, threatening or invasive of another’s privacy or otherwise illegal;
  • are trolling or threatening;
  • advocate or endorse violence, vigilantism or law breaking;
  • infringe on copyrights or trademarks;
  • are self-promoting;
  • violate the law or breach court-ordered suppressions or have the potential to breach future suppressions; or
  • constitute a contempt of court or that contain details of cases and individuals before the courts;
  • violate our terms and conditions for user generated content;
  • promote, advertise or solicit the sale of any goods or services;
  • nitpick other commenters’ spelling or grammar;
  • deny anthropogenic climate change;
  • deny the Holocaust;
  • add nothing to the debate;
  • just generally aren’t very nice.

That covers just about anything Stuff decide they don’t want to publish – which is their right on their website.
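The automated part of that policy – maintaining a list of banned obscenities and automatically rejecting comments that use them – is the most basic kind of comment filter. A minimal sketch of how such a word-list check typically works (the terms and names here are purely illustrative; this is not Stuff’s actual system):

# Illustrative only – not Stuff's actual moderation system.
# Comments containing any term on a maintained banned list are rejected
# automatically; everything else still goes through to human moderation.
BANNED_TERMS = {"bannedword1", "bannedword2"}  # placeholder entries

def should_auto_reject(comment: str) -> bool:
    """Return True if the comment contains a banned term."""
    words = set(comment.lower().split())
    return bool(words & BANNED_TERMS)

print(should_auto_reject("a comment including bannedword1"))  # True
print(should_auto_reject("a perfectly civil comment"))        # False

A real system would be far more sophisticated, but the principle – match against a maintained list, reject automatically – is the same.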

Those conditions are quite similar to what Whale Oil has operated under for several years.

Usernames are also bound by these Terms and Conditions and offensive usernames will be blocked. Using your real name is preferred best practice.

We reserve the right to cut, crop, edit or refuse to publish your content. We may remove your content from use at any time.

With rare exceptions, we will not usually enable comments on stories concerning:

  • 1080
  • allegations of criminality or misconduct
  • animal cruelty
  • beneficiaries
  • Christchurch mosque shootings of March 2019
  • court cases
  • domestic violence
  • fluoride
  • funerals
  • immigrants or refugees
  • Israel and Palestine
  • Kashmir
  • missing people
  • race
  • sexual orientation
  • suicide
  • Treaty of Waitangi
  • transgender issues
  • vaccination
  • vulnerable children

That’s a lot of topics deemed out of bounds for commenters.

They say they typically have several thousand comments a day submitted, so it’s a big workload monitoring them all.

Restricted or selective commenting is becoming more common.

Perhaps a reality is that media sites are not suited to open slather comments. Not only are they difficult to manage, they distract from their core purpose, to report news and to provide commentary.

Any site has the right to allow or not allow public comments.

Stuff: Our rights

We retain the right and discretion (but not the obligation) to edit, delete, reject or remove any comment which you post or seek to post in the comments or Stuff Nation areas.

As does any website owner or manager.

Kiwiblog Comments Policy:

Who has the right to post comments on this blog?

Apart from me, no-one at all has the right to post comments. Posting is a privilege, not a right.

Okay, so who is allowed to post comments here?

Anyone at all, up until the stage I ask them to stop or suspend them

There are plenty of other places that people can comment online, so it’s not really a restriction on free speech – before the Internet there was far less freedom to speak via newspapers, radio and television.

Online discussion and debate will no doubt continue to evolve.

Trying to shut down speech a partisan overreaction

All media should be considering how to deal with radical and provocative speech, and speech that could bolster extreme views and potentially actions.

But this (and I’ve seen similar elsewhere) is an alarming overreaction.

I have also seen claims that ‘virtue signalling’ is responsible for various things.

Politically motivated attempts to put blanket bans on speech are not helpful in the current situation.