Plan for NZ system that will help parents protect their children online

…a carefully designed and flexible package that parents would sign up for when they purchase their phone and internet plan – a package that they pick and choose themselves, according to the level of protection they want to provide for their child.

The Internet has had a major impact on society. Many of us use it daily; it has become an integral part of our lives. There are many good things we can use the Internet for, but there are also many dangers, especially for children.

There is increasing evidence of the extent to which young people routinely see horrible material on their social media feeds. The Youth and Porn study commissioned by the Office of Film and Literature Classification, released late in 2018, surveyed 2,000 New Zealand teenagers aged 14 to 17. Three-quarters of the boys and more than half of the girls had seen online porn – including depictions of sexual violence and non-consensual sex. One in four had seen it before the age of 12. Most had not been looking for it, but they came across it anyway. Most had not talked about this with their parent or caregiver.

Such facts can make parents feel very disempowered and helpless.

It’s common for parents to have little idea what their children do and see online. There is a plan to trial a system in New Zealand to give them control over what their children can do.

Matt Blomfield has been the victim of some of the worst online attacks and harassment, much of it a sustained series of attacks on the Whale Oil blog. He was also attacked and badly injured at his home by a man with a shotgun. This was witnessed by his wife and daughters.

Matt took Cameron Slater to court over this and, after years of battling, he won. Slater filed for bankruptcy earlier this year and his company, Social Media Consultants, went into liquidation. Matt took control of the whaleoil.co.nz website, which he is now using to promote his plan to give parents better control over what their children do online.

Now if you go to that site you will see this:

In the minutes and hours following the shooting of nearly 100 Muslim worshippers at two Christchurch mosques on March 15 this year, Matt Blomfield’s 13-year-old daughter had live footage of the carnage shared to her Instagram account by four separate people. She watched the whole thing, filmed by the gunman on a GoPro attached to his helmet. She saw terror and panic; she saw real people ripped apart by real bullets. She saw the blood. She didn’t tell her parents.

Many of the other kids at her school also saw the video, as did many thousands of others around New Zealand and the world. Instagram, Facebook, YouTube… it was shared more than 1.5 million times. It just popped up on people’s – children’s – social media feeds, unasked for.

It was only some months later that his daughter told Matt what she’d seen. It’s a parent’s nightmare, he says. He felt keenly that his ability to raise his daughters the way he wanted to – that is, appropriately protected, with some control over the rate at which they are exposed to the complexities of the world – had been usurped by the giant corporations whose platforms bring horrible material straight to his kids’ devices.

It felt very wrong. Something needs to be done, he said to himself.

In fact, Matt had already begun work on “next”. After years of putting energy into negative court battles with Slater, Matt wanted to work on projects that contribute positively. During his years of struggle he thought long and hard about the wider issues inherent in his personal battle: the immensely complex matter of balancing democratic access to the internet and freedom of expression on it against controls to prevent it becoming a weapon of harm; the inability of our justice and enforcement systems to respond effectively to breaches of the law when they happen on social media; and the sheer, global scale of the platforms that dominate the internet, and the difficulty for individual jurisdictions in controlling content.

When you are attacked and harassed online it can be very difficult to defend yourself and to stop the attacks. I know from my own experience – the @laudafinem Twitter account was used to attack many people with apparent impunity. It has only just been suspended (“Twitter suspends accounts which violate the Twitter Rules”), but that can be difficult to achieve; Twitter dismissed my complaints in 2015. Lauda Finem’s website was shut down in 2017, but they still have content online, including numerous breaches of name suppression orders. Courts are still dealing with some of this, but they are very slow, with complaints made five years ago still unresolved.

This is bad enough for adults. There are also many risks for children.

In November 2016, Matt drafted a Universal Declaration of Rights Pertaining to the Internet. He managed to get some interest from the Privacy Foundation, with a little more interest expressed by organisations in the aftermath of the Christchurch shootings. He’d hoped it might be championed at government level, but so far that hasn’t happened.

He watched with considerable interest as Ardern headed overseas in the wake of the Christchurch shootings to try and win multi-lateral cooperation to better control the spread of harmful material. He noted the increasing public concern and debate about social media platforms but, along with that, the powerless handwringing that usually accompanies such conversations. Many people, and certainly many parents, not only worry about the material that children are watching, but are also deeply conflicted about both their ability and their right to do anything about it.

Matt has no such dilemma.

“As parents, we have a responsibility for our children not to watch mass shootings at age 13, or porn at age 10,” he says. “Let’s stop and take a look at what the problem is, the elephant in the room, which is what’s happening right here on our own shores. Our kids, here in New Zealand, are watching stuff that no parent would want them watching”.

“We’re sitting here worrying about youth suicide statistics, youth mental health, young kids who feel shit about their own bodies and their own lives, kids who are getting their sex and relationship education through free porn sites controlled by massive corporates. And we’re sitting here going, this needs to change. And we’re waiting for the government to do it. Waiting for Facebook to do it. Waiting for Instagram to do it. Waiting for who?”

“Jacinda’s efforts are good, but only partially deal with the problem. Up until now, the corporates have decided what happens to us online, and now they’re deciding what steps they’re going to take to help us. We can’t leave it up to them. Let’s take the steps ourselves and get back some control.”

Matt believes it will take a community effort to save our children from the harmful effects of exposure to damaging and illegal material on the internet. Our own community, saving our own children.

“Who are we counting on to sort this out for us? And the answer is, it’s not one person’s fix. This is not just a corporate or government issue. It’s a collective issue. We need a combination of commercial businesses, academia and government to work together on this with a common goal of saving our kids.”

He’s right. We can’t rely on large corporates like Google, Facebook, Instagram and Twitter to protect us and our children. We can’t rely on our Government either, which hasn’t done much so far.

Perhaps we need someone like Matt to promote much better action – and the more support he gets, the more chance there is of achieving something worthwhile.

He talked to people he knows in the technology sector, and it became apparent to him that the technology already exists that could put the power back into the hands of parents. What doesn’t exist, however, is a system around the technology to ensure that it’s easy to use, flexible enough to provide for individual choice and control, and expertly tailored to acknowledge important steps of a child’s developing maturity. In other words, this concept needed a comprehensive vision and, crucially, a plan.

That is what Matt’s doing next.

He’s begun putting together an informal working group, comprising technology experts in big data, AI and software development, child development specialists, media academics, and ISP and handset providers – as well as smart business minds, branding and sales experts. He’s casting his net wide, hoping other people with expertise and ideas in this broad area will get in touch.

He envisages a carefully designed and flexible package that parents would sign up for when they purchase their phone and internet plan – a package that they pick and choose themselves, according to the level of protection they want to provide for their child. Information will be provided about child development, and the levels of understanding inherent in each stage of a child’s developing intellectual and emotional maturity.

“People are daunted by the scale of the internet,” Matt acknowledges.

Daunting, but we will only remain helpless if we don’t do more to help ourselves, and our children and grandchildren.

“We know that China simply banned Facebook – they can do that because they are an authoritarian society. Of course, we don’t want to do that anyway, but it points to the difficulty of creating safeguards in a society like ours where we’re concerned about censorship and the fair balance of opinions. So, let’s give the power back to the people and let the people decide.

“Big corporations want your data. They use it to learn a lot about you, to push advertising and sell you more. On the other hand, they do not enable you to have access to that data, and there is no AI looking out for people in this equation.  There is no balance of data, no fair exchange of value.  As an example, Google is starting to get its hands on individuals’ health data (Stuff: ‘Google wants to get its hand on your health data’, 17-11-19) without people’s consent; its objective is to grow its revenues.

“My plan is about taking that control away from the corporates, and taking the responsibility away from them in some sense because we don’t trust them with that responsibility. We’ll give parents the choice to decide what their children can and can’t see.”

New Zealand is the perfect place to trial such a system, he believes.

If enough of us think that something can and should be done, we can help make it happen.

If you are interested in discussing this with Matt, send an email to:  MATT@BLOMFIELD.CO.NZ

Watch this space.

Herald announces digital subscription model for premium content

NZ Herald has announced pricing for its ‘premium’ digital subscription – $5 per week, although a ‘special introductory offer’ will be announced next week when the premium content launches.

That’s $260 a year, quite a bit for part of one media company’s content. It’s a risk, especially if the free content is watered down too much and keeps promoting so much trivial clickbait.

This has been a long time coming; it has been talked about for years.

NZ Herald launches digital subscriptions for premium journalism, reveals pricing

NZME will become the first major New Zealand media business to unveil digital subscriptions – costing $5 a week, with a special introductory offer to be announced next week.

While much of the content on the site will remain free, digital subscribers will access a range of premium content across business, politics, news, sport, lifestyle and entertainment, including in-depth investigations, exclusive reports, columns and analysis. There will also be more foreign, premium content from a range of internationally renowned mastheads.

I can get all the international news and analysis I want now.

People who have five-, six- or seven-day subscriptions to the NZ Herald or one of NZME’s five regional newspapers – the Northern Advocate, Bay of Plenty Times, Rotorua Daily Post, Hawke’s Bay Today and Whanganui Chronicle – will have automatic access to premium content. Print subscribers will be contacted next week with details of how to activate their digital subscription.

So it’s free for newspaper subscribers – for now at least.

They say it will help support ‘quality journalism’ and will provide ‘in-depth analysis and insight’. If it allows them to do more of this that will be a good thing, provided they get sufficient subscriptions to keep funding it.

One problem with important investigations being limited to subscribers is that it will limit their impact. The glare of publicity can sometimes change negative things that have been happening or have been done, and that publicity will be reduced if it is limited to subscription content.

I presume they will promote summaries or teasers of premium content so people know what they might be missing out on.

On a related matter – three years ago my household decided to drop our ODT print subscription, because we found we were hardly reading it, and could get sufficient news online.

Last year we restarted our ODT subscription. We found we missed it, especially for local (Dunedin and Otago) news, and also information about what was happening in the area. It does a good job generally on local news, and we felt it was worth supporting. And we are reading it more now – there’s something about flicking the large paper pages and browsing.

This is one reason why I won’t be subscribing to the Herald online.

The ODT republishes some Herald content – I wonder if this will continue and will include premium content?

Action Station report on hate speech, versus free speech

Free speech online is actually working a lot, but often not how people want it to work. Can we do much about it? Or do we just have to go with how things evolve, both good and bad?

Action Station has just released The People’s Report on Online Hate, Harassment and Abuse.

It is not ‘the people’ who have done the report; it is ‘some people’. Good on them, but they should not claim to speak for ‘the people’.

For decades, the internet has been hailed as a groundbreaking interactive marketplace of ideas, where anyone with access to data and a device can set up a stall.

Online tools have made it possible to communicate easily with friends and whānau around the world, sell and purchase goods and services, enrol to vote, raise billions for charitable causes or start-up businesses, and even hail a ride or meal to your front door.

The internet has helped give people who have historically been locked out of democracy by discrimination or poverty a way to voice the needs of their communities and organise at scale.

Over the past four years, ActionStation members have used digital tools and platforms to connect and collaborate with hundreds of thousands of other New Zealanders who share their vision and values to engage powerfully in our democracy.

In the 21st century, social media has become the new public square.

The downside to this unparalleled information exchange and connectedness is that the internet also provides a powerful and relatively cheap way for groups and individuals to spread hate, fear, abuse and mis/dis/mal-information across time and space, and without transparency.

The term ‘fake news’ has been widely used to refer to a range of different kinds of false and harmful information.

While ActionStation has been at the forefront of exploring and facilitating digitally-enhanced democratic participation in New Zealand, we have also been exposed to these downsides.

It is that exposure that has prompted this report.

In 2015, the National-led government passed the Harmful Digital Communications Act (HDCA). It states that a digital communication should not:

“…denigrate a person’s colour, race, ethnic or national origins, religion, gender, sexual orientation or disability.”

In 2019 we ask: has the Act worked? Is the internet free from prejudice and harm? Do people feel safe to participate freely in conversations online? Or is there more work to do?

They say their findings show:

Why is it worse for people from some groups?

The Harmful Digital Communications Act (2015) is a powerful piece of legislation that was enacted to address the issue of online abuse. However, it is not sufficient to address every issue of online hate, harassment and abuse.

The law (while broad) is designed for only a limited number of situations where online harm occurs. Specifically, it appears to work well in many cases of one-to-one abuse, where an individual who is being abused can contact Netsafe and identify the abuser.

There have, however, been instances, some high profile, where seemingly clear-cut cases of abuse and harassment are deemed not to breach the Act, such as when a Facebook user commented that writer Lizzie Marvelly should try “bungy jumping without the cord”.

The tools of the HDCA appear unsatisfactory in other cases of serious abuse online, such as when an organised group (often using ‘shill’ accounts and fake identities) is targeting an individual. There are also cases where hate is being directed at a group of people, but not necessarily targeted at an individual who can lay a complaint, where there is still a considerable harmful ‘bystander’ effect.

In New Zealand, the Human Rights Act currently includes provisions that cover both civil and criminal liability for the incitement of racial disharmony. However, the threshold is extremely high and there is a profound scarcity of successful racial disharmony claims to the Human Rights Review Tribunal.

Racial disharmony provisions only apply to instances where hostility is stirred up amongst people other than those who are the subject of the hate. The expression of hatred in and of itself (or the effect of that hatred on the person or group it is directed towards) is not sufficient for the law to apply. The hate speech provisions in the Human Rights Act also apply only to colour, race, or ethnic or national origins and not religion. ‘Hate speech’ against religion, or even religious people, is not unlawful.

Any laws against hate speech and harassment should be generic and protect anyone who is targeted.

One of the most significant themes to emerge in this research was the need to attend not just to individualised concerns (e.g. individual rights and privacy) but also to collective dynamics and wellbeing. Therefore any policies that are developed to protect people online and ensure their ability to participate freely and safely online need to have at their centre indigenous and collectivist thinking, especially as Māori have historically (and presently) been among those who are most targeted by hateful speech.

Māori digital rights advocate Karaitiana Taiuru says that two Māori values in particular could help support those who build the technology that permeates so much of our lives to build tools for a safer, better internet. Manaakitanga (How can we build tools that encourage users to show each other care and compassion and work to uplift each other?) and Kaitiakitanga (How can we build tools where all users become the guardians of the experience and data in a highly trusted, inclusive, and protected way?).

I’m not sure why ‘indigenous thinking and values’ in particular should provide the solutions. That’s ironic given their support of diversity. Surely all thinking and values should be considered.

After that their report stops. Back at the start they have a call to action – Sign the Petition – though as of now that link doesn’t work; another link gets to it:

The time has come for urgent action to address the significant threats online hate, harassment and abuse is causing to New Zealanders.

We are asking Justice Minister Andrew Little to implement our recommendations and work with the online platforms to ensure our online spaces are safe for everyone.

If the internet is the new public square, it is imperative that lawmakers ensure the ability of all New Zealanders to access reliable and credible information about issues of public importance, and the ability of everyone in this country to participate safely in public conversations about those issues.

Add your name to the petition to show your support and help us fight for change.

Proposed solutions:

Based on our analysis, we are making four recommendations to the New Zealand government:

Remove: Ensure platforms are active in removing harmful content quickly. An investigation into the most effective method to do this would be required, but the responsibility should be placed on the platform, not the users.

Reduce: Limit the reach of harmful content. Neither the platforms nor the users who create hateful and harmful content should benefit from algorithms that promote divisive and polarising messages.

Review: The New Zealand government needs to review our hate speech laws, the Harmful Digital Communications Act, the Domestic Violence Act, the Harassment Act and the Human Rights Act to ensure they are fit for purpose in protecting people online in the 21st century.

Recalibrate: One of the most significant themes to emerge in this research was the need to attend not just to individualised concerns (e.g. individual rights and privacy) but also to collective dynamics and wellbeing. Any policies that are developed to protect people online need to have indigenous and collectivist thinking at their centre. They should also ensure that all internet safety / hate speech agencies funded by the Crown reflect the increasing diversity of our country.

They won’t solve all of the problems with the internet, or even all the ones described in our report. But it would be a start.

More is explained at The Spinoff: ‘The internet is the new public square. And it’s flowing with raw sewage’ by Leroy Beckett, the Open Democracy campaigner at ActionStation.

Speech and behaviour online are issues that certainly need to be considered, but far more widely than by Action Station.

Free speech is a fundamental part of an open democratic society. Protections which limit free speech need to be carefully considered.

Mental health of online moderators

An ODT article today doesn’t seem to be online, but it refers to this: We need to talk about the mental health of content moderators

Selena Scola worked as a public content contractor, or content moderator, for Facebook in its Silicon Valley offices. She left the company in March after less than a year.

In documents filed last week in California, Scola alleges unsafe work practices led her to develop post-traumatic stress disorder (PTSD) from witnessing “thousands of acts of extreme and graphic violence”.

Facebook acknowledged the work of moderation is not easy in a blog post published in July. In the same post, Facebook’s Vice President of Operations Ellen Silver outlined some of the ways the company supports their moderators:

All content reviewers — whether full-time employees, contractors, or those employed by partner companies — have access to mental health resources, including trained professionals onsite for both individual and group counselling.

But Scola claims Facebook fails to practice what it preaches. Previous reports about its workplace conditions also suggest the support they provide to moderators isn’t enough.

How moderating can affect your mental health

Facebook moderators sift through hundreds of examples of distressing content during each eight-hour shift.

They assess posts including, but not limited to, depictions of violent death – including suicide and murder – self-harm, assault, violence against animals, hate speech and sexualised violence.

Studies in areas such as child protection, journalism and law enforcement show repeated exposure to these types of content has serious consequences. That includes the development of PTSD. Workers also experience higher rates of burnout, relationship breakdown and, in some instances, suicide.

This is a modern problem that an increasing number of people are exposed to. The Internet has made a huge amount of information readily available to most of the world, but unfortunately a lot of material reflects the worst of the world, and the worst of human nature.

We also need to address the ongoing issue of precarity in an industry that asks people to put their mental health at risk on a daily basis. This requires good industry governance and representation. To this end, Australian Community Managers have recently partnered with the MEAA to push for better conditions for everyone in the industry, including moderators.

As for Facebook, Scola’s suit is a class action. If it’s successful, Facebook could find itself compensating hundreds of moderators employed in California over the past three years. It could also set an industry-wide precedent, opening the door to complaints from thousands of moderators employed across a range of tech and media industries.

Rapidly changing use of technology means that solutions to problems introduced by the technology will struggle to keep up.

Note that I am one online moderator who has no concerns about the exposure I get and have to deal with. The problems here are very minor in comparison to some parts of the Internet, and I am not reliant on this for earning a living, so it is choice rather than necessity that I continue to deal with the relatively trivial moderation concerns here.


US Supreme Court rules on online sales tax

The US Supreme Court has overturned a ruling that had given online retailers a way of avoiding some state taxes.

NY Times: Supreme Court Clears Way to Collect Sales Tax From Online Retailers

Internet retailers can be required to collect sales taxes in states where they have no physical presence, the Supreme Court ruled on Thursday.

Brick-and-mortar businesses have long complained that they are disadvantaged by having to charge sales taxes while many of their online competitors do not. States have said that they are missing out on tens of billions of dollars in annual revenue under a 1992 Supreme Court ruling that helped spur the rise of internet shopping.

On Thursday, the court overruled that ruling, Quill Corporation v. North Dakota, which had said that the Constitution bars states from requiring businesses to collect sales taxes unless they have a substantial connection to the state.

This could be significant for New Zealand. If internet retailers like Amazon have to comply with all the state taxes in the US (a complex thing), depending on the location of the purchaser, then it should be relatively simple for them to also comply with the tax requirements of other countries.

Writing for the majority in the 5-to-4 ruling, Justice Anthony M. Kennedy said the Quill decision had distorted the nation’s economy and had caused states to lose annual tax revenues between $8 billion and $33 billion.

But there could be a downside. If online retailers are forced to charge more tax in the US they may look for more sales in places where they can get away without charging tax.

Harmful Digital Communications

The Harmful Digital Communications Act has taken a long time to come into force, but from today anyone who is experiencing online abuse, harassment and cyberbullying can report to the appointed agency, Netsafe, who will “receive, assess and investigate complaints”. What they will do to help targets of abuse and how effective this will be is yet to be seen.

From Netsafe’s website:

Anyone who is experiencing online abuse, harassment and cyberbullying can get help from Netsafe thanks to the Harmful Digital Communications Act (the Act). Netsafe will receive, assess and investigate complaints related to harmful digital communications from 21 November, 2016.


The Act tackles some of the ways people use technology to hurt others. It aims to prevent and reduce the impact of cyber-bullying, harassment, revenge porn and other forms of abuse and intimidation.

The Act provides quick and affordable ways to get help for people receiving serious or repeated harmful digital communications. A digital communication is harmful if it makes someone seriously emotionally distressed, and if it is a serious breach of one or more of the 10 communication principles in the Act.

What are harmful digital communications?

Harmful digital communications take a variety of forms such as private messages or content that others can see. It includes when someone uses the internet, email, apps, social media or mobile phones to:

  • send or publish threatening or offensive material and messages;
  • spread damaging or degrading rumours about you; and
  • publish online invasive or distressing photographs or videos of you.

What are the 10 communication principles?

The 10 principles work as a guide for how people should communicate online. Netsafe and the District Court will look at these when deciding if a digital communication breaches the Act.

The 10 principles say that a digital communication should not:

  1. disclose sensitive personal facts about a person;
  2. be threatening, intimidating, or menacing;
  3. be grossly offensive;
  4. be indecent or obscene;
  5. be used to harass a person;
  6. make a false allegation;
  7. breach confidences;
  8. incite or encourage anyone to send a deliberately harmful message;
  9. incite or encourage a person to commit suicide; and
  10. denigrate a person’s colour, race, ethnic or national origins, religion, gender, sexual orientation or disability.

How to get help?

If you are concerned about the immediate safety of you or someone else, please call 111. If you or someone you know needs help with a harmful digital communication, contact Netsafe toll free on 0508 NETSAFE or complete a complaint form at

Netsafe can look into your complaint and tell you if there’s anything else you can do to stop the abuse and stay safe. We may also work with you and the person harassing you to get them to stop.

If Netsafe can’t resolve things, you can apply to the District Court for help – but you have to have tried to resolve things with Netsafe first.

How can the District Court help?

The court deals with cases of serious or repeated harmful digital communications that Netsafe hasn’t been able to resolve.

The court will look into whether the person harassing you has seriously breached, will seriously breach or has repeatedly breached one or more of the 10 communication principles. It will also consider how people responded to the advice Netsafe provided.

The court has the power to order people to stop their harmful digital communications and take action including:

  • Ordering material to be taken down;
  • Ordering someone to publish a correction, an apology or give you a right of reply;
  • Ordering online content hosts (like social media/telecommunication companies or blog owners) to release the identity of the person behind an anonymous communication; and
  • Ordering name suppression to protect your identity or the identity of anyone else involved in the dispute.

Anyone who ignores the District Court’s orders can be prosecuted and penalised. The penalty is up to six months in prison or a fine up to $5,000. Companies can be fined up to $20,000.

What if the situation is really serious?

The Act also includes a criminal offence to penalise the most serious perpetrators. It is illegal to send messages and post material online that deliberately cause somebody serious emotional distress.

Police will handle these most serious cases. They may prosecute a person or company if:

  • they intended the communication to cause harm;
  • it’s reasonable to expect that a person in your position would be harmed by it; and
  • you were harmed.

The court will consider a variety of factors including how widely the material spread and whether what was said was true or not. The penalties for this offence are a fine of up to $50,000 or up to two years’ jail for an individual, and up to $200,000 for a body corporate.

How this affects Your NZ and you if you comment here: Harmful communications and Your NZ


Low turnouts and online voting

Local body election turnouts have been very low again, with most in the low forties – Auckland couldn’t even make 40%, probably in part because Phil Goff was always anointed by media as a shoo-in.

General election turnouts have been declining for some time as well.

In the modern world where communities and media are so fragmented is it possible to ever get any semblance of civic pride back? Most people simply don’t care about local body elections in particular.

Even those who do think they should vote struggle to front up – like me.

I filled in my voting papers with difficulty on Saturday morning and delivered them half an hour before closing (thanks DCC for having people with voting boxes picking up votes from cars in the Octagon).

A mixture of not knowing anything about most of the candidates and an awfully difficult and confusing system of voting makes procrastination easy. I seriously considered not bothering to vote.

I voted on four things.

Mayor

Eleven candidates to choose from. The incumbent was very unlikely to lose, and there was a lack of strong alternatives. Under STV, ranking them 1-11 was easy enough.

Council

Forty-three bloody candidates that require ranking. This is a major task to do anything other than randomly.

I started by numbering those I didn’t want elected, working backwards from 43.

Then I numbered ones that I supported starting from 1 then working my way down.

Then I had about thirty in the middle to decide on. This became increasingly random as I worked my way up and down. Then the sequences didn’t meet in the middle, so I had miscounted somewhere.

And I didn’t care. I knew that would invalidate my vote from where I stuffed up and I didn’t care whether that was near the top or the bottom of the sequence. I just gave up.
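Under STV a ranking only counts up to the first break in the 1, 2, 3… sequence, which is why a miscount like the one above invalidates every preference after the gap. A minimal sketch of that check (a hypothetical helper, not an official counting tool):

```python
def valid_prefix(ranking):
    """Return the preferences that still count on an STV ballot:
    everything up to (but not including) the first gap or duplicate
    in the 1, 2, 3... sequence of ranks."""
    by_rank = {}
    for candidate, rank in ranking.items():
        # A duplicated rank breaks the sequence at that point.
        if rank in by_rank:
            by_rank[rank] = None
        else:
            by_rank[rank] = candidate

    counted = []
    expected = 1
    while by_rank.get(expected) is not None:
        counted.append(by_rank[expected])
        expected += 1
    return counted


# A five-candidate ballot where the voter miscounted: ranks run
# 1, 2, 3, 5, 5 - only the first three preferences still count.
ballot = {"A": 1, "B": 2, "C": 3, "D": 5, "E": 5}
print(valid_prefix(ballot))  # ['A', 'B', 'C']
```

Everything before the stuff-up survives; everything after it is lost, wherever in the sequence the miscount happens.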

Community Board

This was easier, with ‘only’ 12 to rank. I hardly knew anything about most of them but I looked at their pictures and read their blurb and took a stab.

Regional Council

I thought this was relatively easy, with only 10 candidates. I even knew one of them and knew of one or two others. So I ranked them. Then the fine print was pointed out to me – all the others were STV votes but the Regional Council election wasn’t, so I should have just ticked the six I wanted!

So I scribbled out my ranks and ticked beside them. I don’t care if that counted or not.

There must be a better way to vote.

Postal Voting

Postal voting was introduced to try and stem the decline in turnout, unsuccessfully.

There are significant flaws with postal voting.

It is common for people not to change their electoral roll addresses, especially in a university town like Dunedin. Many papers arrive at addresses where the voters don’t live any more. I received papers for someone who hasn’t lived here for several elections.

A stupid thing about enrolling is that the Electoral Commission sends a letter to your enrolled address saying that if you don’t live there any more you should let them know. I’m not sure how you are supposed to get this letter.

If there is no reply they assume you must still live there. This is nuts.

Postal voting is ideal for procrastinators – it’s very easy to put off voting until tomorrow. And when it’s too late it doesn’t matter, as you don’t know most of the people others voted in anyway.

Online Voting

There are strong supporters of online voting, and also strong opponents.

A trial of online voting was seriously considered by some cities and regions this election, but that fell through.

Lynn Prentice appears to not favour online voting: Online voting – the only choice for idiots

As a computer programmer and someone who has been involved in politics for decades, I’m always amazed at idiots like Malcolm Alexander of the LGNZ talking about something that they clearly don’t understand the technicalities of. Online voting is way too fragile to roll out. And anyway young voters are still going to not have their voting details at hand.

In his language an idiot is someone he disagrees with, and I’m sure he’s called me an idiot more than once already.

I don’t think online voting could be much worse than postal voting, and you might get more people voting.

At the very least I think we should have online tools to help us vote, especially in the complex local body elections.

An app or a website that made it easy to rank candidates (and showed you where a tick was required) would have made voting much easier for me, especially if it included candidate information along with links to their websites, Facebook pages and Twitter accounts. Then some real research would be possible.

You can’t just stick a pin in a smart phone.

Online voting could also randomise the candidates so the Andersons don’t get an unfair advantage over the Williamsons. The voting papers are randomised, but the Candidate Information booklets are in alphabetical order, which adds to the voting confusion.
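Per-voter randomisation of candidate order is straightforward to do online. A minimal sketch, assuming hypothetical candidate names and voter IDs (not any official system):

```python
import random

# Example candidate list - names are illustrative only.
CANDIDATES = ["Anderson", "Baker", "Chen", "Patel", "Williamson"]


def ballot_order(voter_id):
    """Give each voter their own randomised candidate order, seeded by
    voter ID so the same voter always sees the same order if they
    reload the page. No candidate gets a systematic alphabetical
    advantage across the whole electorate."""
    rng = random.Random(voter_id)  # per-voter deterministic shuffle
    order = CANDIDATES[:]          # don't mutate the shared list
    rng.shuffle(order)
    return order


print(ballot_order("voter-0001"))
print(ballot_order("voter-0002"))  # almost certainly a different order
```

Seeding by voter ID keeps each voter's view stable while spreading first-position advantage evenly across candidates, which a static alphabetical booklet can’t do.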

If I could rank candidates online, then read the results and write them onto my voting papers I think I would put much more effort into voting.

As things are now Lotto is much easier to play than local body voting – and the chances of a good result are about the same.

There must be a better way. I don’t think a properly designed online system would be any worse than what we currently have. Sure it could be abused, but so can postal voting, and I don’t think the proportions of vote cheating would be significant in most elections.

Labour lagging online

The biggest announcement in what is touted as game-changing policy for Labour doesn’t even feature on their website.

Labour announced three parts to their housing policy over the last few days, with the biggest announcements in an Andrew Little speech yesterday.

The first two policies feature on Labour’s website, but yesterday’s big announcements are absent.


Are party websites not important any more?

Ok, they are promoting their policy announcements on Facebook, and it has a link to Our Comprehensive Housing Package.

But their home page makes no mention of this and has no link to it. And it still shows ‘Latest Policy’ as their Working Futures Plan.

Maybe someone forgot about the basics of website maintenance.

Their Facebook page also features their 100th birthday:


But there’s no sign of that on the homepage of their website either. That’s slack.

There’s no excuse for being laggardly online, especially with what is supposed to be a game changer policy – or at least needs to be a game changer. If their housing policy flops, Labour will struggle to get any traction.

How do we make Twitter and the internet a kinder place?

Lisa Owen finished her interview with Jon Ronson on The Nation about public shaming – see Ronson on online shaming – asking “How do we make Twitter and the internet a kinder place?”

One way to do this is to create and maintain kinder places, and I like to think that’s what we have done here with Your NZ.

Another way is to keep reminding yourself that the name or pseudonym you might feel like attacking is usually associated with a real person who is possibly much like yourself. Feelings and reactions can be difficult to express and easy to ignore in cyber conversations, especially with the tight character restrictions that Twitter imposes.

Lisa Owen: how do we make Twitter and the internet a kinder place?
Well, I think conversations like this. I mean, my book came out; Monica Lewinsky came out with a TED talk which I thought was wonderful. Good, important thinkers like Glenn Greenwald are kind of jumping on it too.

And I think if— I think the best thing that can happen is if you see an unfair or an ambiguous shaming going on, speak up. Say something about it. And it’s going to be no question that the shamers will turn on you, and, believe me, I’ve experienced that over the past few months, but it’s the right thing to do. Because a babble of voices talking back and forward about whether something’s deserved or not, that’s democracy.

I think that speaking up and confronting bad and nasty online behaviour is important. Sometimes it works. If you get in early you can sometimes shut down online bullying or at least swing the debate to a more even battle rather than a mob attack against one.

But it has its risks. I know this from experience over the past few years that I have been actively involved in blogs and, to a lesser extent, Twitter.

I’ve been banned from Whale Oil, Public Address and from Dim Post for speaking up against what I thought was awful, or presenting a view that ran against the forum.

I’ve been banned a number of times from The Standard. This has usually involved me standing my ground against mob attacks until the ‘moderator’ pings me for ‘disrupting the blog’ – which is exactly the intent of the attack tactics used against me (and others; it was a common means of shutting down and kicking out alternative voices there).

Despite commenting at Kiwiblog far more than anywhere else I haven’t been banned from there, but I have also been subjected to mob attacks, some insidious threats, either misguided or malicious ongoing criticism, and deliberate lying smears lasting for months or years (Manolo is a notable resident troll).

And as a result of moderating potentially defamatory comments here on Your NZ, providing a right of reply, and confronting unsubstantiated and false accusations on Twitter I have found myself on the receiving end of some particularly insidious attention from recidivist online attackers, the full extent of which I can’t yet reveal for legal reasons but will get that story out into the sunlight if and when I’m able to.

But to make at least parts of the Internet kinder places the bullies have to be confronted and exposed, or they will keep attacking and bullying.

Thanks to those of you who have helped make Your NZ a kinder place to discuss and share things. It can be done, and if it works well it will grow and spread.

A healthy democracy needs diverse opinions openly expressed and issues robustly debated. It also requires decency, respect of others, respect of the right to disagree, and recognition of the responsibilities involved with free speech.

Good things often don’t come easily but if we keep working on it we can and will contribute to making the Internet a kinder place.

It’s worth remembering (the Bible has some wise quotes):

 “All things whatsoever ye would that men should do to you, do ye even so to them.”

And the similar Mosaic law:

“Whatever is hurtful to you, do not do to any other person.”

Ronson on online shaming

Welshman Jon Ronson was interviewed on The Nation yesterday about personal attacks and ‘public shaming’ online.

Welsh writer Jon Ronson spent three years tracking down victims of public online shaming.

His latest book So You’ve Been Publicly Shamed looks at what happens after the flurry of tweets and posts sweep through cyberspace.

The interview begins:

Lisa Owen interviews Jon Ronson, author of ‘So You’ve Been Publicly Shamed’

Lisa Owen: It’s a phenomenon of the digital age: online shaming. But what happens after the flurry of outraged tweets and posts sweeps through cyberspace? Award-winning writer and documentary maker Jon Ronson spent three years travelling the world to find out. The result was his book “So You’ve Been Publicly Shamed”. I caught up with Ronson in Brisbane and asked what prompted his interest in the dark side of social media.

Jon Ronson: I guess in the early days of social media, I was a bit of a shamer like everybody else. You know, I’d tear somebody apart for stepping out of line. And then it wouldn’t even cross my mind to wonder whether the people I’d destroyed or helped to destroy were okay or were in ruins. And I thought, you know, ‘This isn’t necessarily who I want to be,’ because I felt that no longer were we shaming people who deserved it; it’s no longer were we doing social justice.

What we had started doing instead was tearing into private individuals, who had done almost nothing wrong; just, kind of, made a joke that came out badly on Twitter. And it was like we had lost a capacity for empathy, and also lost our capacity to distinguish between serious and unserious transgressions.

So it suddenly felt really important to me that I would go around the world and meet the people that we had destroyed to rehumanise them, I guess.

I do want to ask the ‘why’ question, then, because people seem to act very differently online than what they would if they spoke to you face to face. So why do you think that is?

Well, I think there’s a number of reasons. I think, obviously, the drone strike operator doesn’t need to think about the village that he’s just blown up, and on the internet, we’re like drone strike operators. And also, I think the snowflake doesn’t need to feel responsible for the avalanche.

So if hundreds of thousands of people are tearing apart a single person, we don’t need to feel responsible for it. And also, we play these, kind of, psychological tricks on ourselves.

We think, ‘Okay, that person we just destroyed, I’m sure they’re fine now.’

Or we think, ‘That person we just destroyed, oh, they’re probably a sociopath.’ So we’re constantly coming up with psychological tricks to make ourselves feel not so bad about destroying people.

Or trying to destroy people, trying to destroy their reputations or their credibility.

This is pertinent to New Zealand because I’ve seen what he talked about happening frequently online involving New Zealanders, often involving mob attacks.

It’s not just mob attacks; sometimes blogs or individuals sustain attacks on people for months or years. Sometimes politicians are the targets, like Helen Clark and John Key in particular – the bigger they are the bigger the attempted fall.

This happens on Twitter and Facebook. It is done by blogs, notably Whale Oil and Lauda Finem to the extent sometimes of ongoing smear campaigns that can amount to alleged and actual defamation. And it has been done within comments threads, I’ve seen it often in the past on Kiwiblog and The Standard, usually done by resident trolls.

Sometimes the attacks are done openly by known identities, sometimes by people hiding behind pseudonyms.

It can be very difficult for the targets of the abuse to do anything about it, and it can be very damaging.

Sometimes it has resulted in court action, either as a defence against attacks or as an additional means of attack. Recently Colin Craig took it to extraordinary lengths by letter box dropping a booklet across the country.

Overseas examples are worse than I’ve seen here…

I want to use and talk about an example from your book – a woman who shamed two IT workers, who- she shamed them on Twitter; they were telling rude jokes at a conference. One of the guys lost their job. And then what happened was Twitter turned on her. And the tweets said things like, ‘Cut out her uterus. Kill her. F… the bitch. Make her pay.’ And someone even described shooting her in the head, close-range. I mean, these were apparently everyday people, weren’t they, saying these things?

Everyday people – I mean, when you get to the heart of, like, why people are shamed on social media, it seems to be always people who are perceived to have misused their privilege. So one of them lost his job, and then she, in shaming them, was perceived to have misused her privilege, like she had publicly shamed these two men to her 12,000 Twitter followers or whatever, so people just, sort of, tore her apart.

Even worse, actually, because when you are a woman in a shaming, the range of insults is way worse. You know, when a man is shamed it’s, ‘I’m going to get you fired.’ When a woman is shamed it’s, ‘I’m going to rape you and cut out your uterus,’ and so on. But the problem here is the misuse of privilege.

…but what happens in New Zealand is shameful nevertheless.

On Jon Ronson (Wikipedia):

Jon Ronson (born 10 May 1967) is a Welsh journalist, author, documentary filmmaker and radio presenter whose works include the best-selling The Men Who Stare at Goats (2004). He has been described as a gonzo journalist, becoming something of a faux-naïf character himself in his stories.

He is known for his informal but sceptical investigations of controversial fringe politics and science.

Video: Interview: Jon Ronson on public shaming:

Full transcript at Scoop: Lisa Owen interviews Jon Ronson