How to regulate the Internet (vaguely)

How to fix speech on the Internet? It will take a lot more than this.

Jordan Carter (chief executive, InternetNZ) and Konstantinos Komaitis (senior director, global policy development and strategy, at the Internet Society) give some general ideas on how the Internet might be regulated to try to prevent it from being exploited by terrorists and extremists – How to regulate the internet without shackling its creativity

At its most basic, the internet is a decentralised technology, a “network of networks” that spans the globe, moving vast amounts of data and services. Its infrastructure layer is where protocols and standards determine the flow of data and enable independent networks to inter-operate voluntarily. A healthy infrastructure layer keeps opportunities open for everyone, because it is where unhindered innovation happens; where we build the technologies and the businesses of tomorrow.

The Christchurch terrorist did not put up a server to broadcast the video. Instead, he used the tools offered by the platforms most of us enjoy innocently. In other words, he did not directly use the internet’s infrastructure layer, but applications that run on top of it.

And this is exactly where the disconnect is. Most new rules and government interventions are spurred by illegal content that happens on the top layer of the internet’s infrastructure – the applications layer, where content exists and proliferates. Yet these rules would have sweeping implications for the infrastructure layer as well.

Interfering with the infrastructure layer, even unintentionally, to fix problems at the content layer creates unintended consequences that hurt everyone’s ability to communicate legitimately and use the internet safely and securely. The internet is a general-purpose network, meaning it’s not tailored to specific uses and applications. It is designed to keep networking and content separate. Current regulatory thinking on how to address terrorist, extremist and, in general, illegal content is incompatible with this basic premise.

That’s why we urge all governments working to protect their citizens from future terrorist and extremist content to focus on the layer of the internet where the harm occurs. Seeking expertise is how governments should regulate the internet, but including only certain companies in the process could be counterproductive. All this does is cement the market power of a few big actors while excluding other, critical stakeholders.

As world and tech industry leaders gather in France for the Christchurch Call, we ask them to focus on interventions that are FIT for purpose:

Fitting – proportionate, not excessive, mindful of and minimising negative and unintended consequences, and preserving the internet’s open, global, end-to-end architecture;

Informed – based on evidence and sound data about the scale and impact of the issues and how and where it is best to tackle them, using ongoing dialogue to deepen understanding and build consensus;

Targeted – aimed at the appropriate layer of the internet and minimising the impact on the infrastructure layer, whose openness and interoperability are the source of the internet’s unbounded creativity and a rich source of future human flourishing.

That’s OK as general advice, but it provides little in the way of specific ideas on how to regulate speech and media without stifling the internet’s strengths.

The biggest challenge remains – how to very quickly identify and restrict hate speech and use of the Internet by extremists, without impacting on the freedom to exchange information, ideas and artistry.

Even from my own very narrow experience I know that people intent on spreading messages that many people would object to can be very determined and go to some lengths to try to work around any restrictions imposed on them.

Kiwiblog recently put in place much more monitoring and clarified what was deemed unacceptable speech, but those stated restrictions were quickly flouted, so offending comments must be getting past the people now doing the moderating.

It will require either some very smart algorithms that are able to adapt to attempts to work around them, or a lot of monitoring and occasional intervention by many people, all with similar levels of good judgment.
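
A toy example shows how quickly the first approach runs into trouble. This is a minimal, purely hypothetical sketch in Python – the banned phrase and the character substitutions are invented for illustration, not taken from any real moderation system:

```python
import re

# Hypothetical phrase a site decides to block outright (illustrative only).
BANNED_PHRASES = ["example banned phrase"]

def naive_filter(comment: str) -> bool:
    """Block the comment if it contains a banned phrase verbatim."""
    lowered = comment.lower()
    return any(phrase in lowered for phrase in BANNED_PHRASES)

print(naive_filter("an example banned phrase in a comment"))   # True  - caught
print(naive_filter("an ex4mple b@nned phr4se in a comment"))    # False - trivially evaded

# The filter adapts by normalising common character substitutions...
SUBSTITUTIONS = str.maketrans("4@3!0", "aaeio")

def adaptive_filter(comment: str) -> bool:
    """Normalise leet-speak style substitutions before matching."""
    normalised = comment.lower().translate(SUBSTITUTIONS)
    normalised = re.sub(r"[^a-z ]", "", normalised)  # drop punctuation tricks
    return any(phrase in normalised for phrase in BANNED_PHRASES)

print(adaptive_filter("an ex4mple b@nned phr4se in a comment"))  # True - caught, until the next workaround
```

Each round of evasion forces another round of adaptation, which is why the choice ends up being between genuinely smart, self-adjusting tools and a large pool of consistently good human moderators.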

Neither approach will be perfect. I have concerns that rushing to restrict bad speech will increase impediments for acceptable speech.

 

Jordan Carter on how to eliminate terrorist and violent material online

Jordan Carter, CEO of InternetNZ, has some ideas on how to help make Jacinda Ardern’s ‘Christchurch call’ work.

(I really wonder if labelling the attempt by Ardern to get social media companies to ‘eliminate’ terrorism online the ‘Christchurch call’ is a good idea. I think it is inappropriate.)

The Spinoff: How to stop the ‘Christchurch Call’ on social media and terrorism falling flat

If we take that goal of eliminating terrorist and violent material online as a starting point, what could such a pledge look like, and what could it usefully achieve?

The scope needs to stay narrow.

“Terrorist and violent extremist content” is reasonably clear though there will be definitional questions to work through to strike the right balance in preventing the spread of such abhorrent material on the one hand, and maintaining free expression on the other. Upholding people’s rights needs to be at the core of the Call and what comes from it.

The targets need to be clear.

From the media release announcing the initiative, the focus is on “social media platforms”. I take that to mean companies like Facebook, Alphabet (through YouTube), Twitter and so on. These are big actors with significant audiences that can have a role in publishing or propagating access to the terrorist and violent extremist content the Call is aimed at. They have the highest chance of causing harm, in other words. It is a good thing the Call does not appear to target the entire Internet. This means the scale of action is probably achievable, because there are a relatively small and identifiable number of platforms of the requisite scale or reach.

But online media keeps changing, so it will be difficult to set a clear target. I think that limiting ‘scale and reach’ to a small number of companies would be a problem; it would be very simple to work around. If there are worldwide rules on the use of social media, they would have to cover all social media to be effective.

The ask needs to be clear.

Most social media platforms have community standards that explicitly prohibit terrorist and violent extremist content, alongside many other things. If we assume for now that the standards are appropriate (a big assumption, one that needs more consideration later on), the Call’s ask needs to centre around the standards being consistently implemented and enforced by the platforms.

Working back from a “no content ever will breach these standards” approach and exploring how AI and machine tools, and human moderation, can help should be the focus of the conversation.

That’s not very clear to me.

There needs to be a sensible application of the ask.

Applying overly tight automated filtering would lead to very widespread overblocking. What if posting a Radio New Zealand story about the Sri Lanka attacks over the weekend on Facebook was automatically blocked? Imagine if a link to a donations site for the victims of the Christchurch attacks led to the same outcome? How about sharing a video of TV news reports on either story?

This is why automation is unlikely to be the whole answer. We will also need to think carefully about how any action arising from the Call won’t give cover for problematic actions by countries with no commitment to the free, open and secure internet.
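
The overblocking worry is easy to make concrete. As a purely hypothetical sketch (the keyword list and the post texts below are invented, not drawn from any actual platform filter), a context-blind keyword filter blocks exactly the kind of legitimate sharing described above:

```python
# Hypothetical context-blind keyword filter (illustrative only).
BLOCKED_KEYWORDS = {"christchurch attack", "sri lanka attacks", "mosque shooting"}

def would_block(post_text: str) -> bool:
    """Block any post mentioning a listed keyword, regardless of intent."""
    text = post_text.lower()
    return any(keyword in text for keyword in BLOCKED_KEYWORDS)

posts = [
    "RNZ: Sri Lanka attacks - what we know so far",               # news report
    "Donate here to support victims of the Christchurch attack",  # fundraising link
    "TV news video on the mosque shooting inquiry",               # broadcast clip
]

for post in posts:
    print(would_block(post), "-", post)   # every one of these legitimate posts is blocked
```

Distinguishing a news report or a donation appeal from the original footage needs context that a keyword or hash match alone does not have, which is exactly why automation alone is unlikely to be enough.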

It will be extremely difficult to get consistent agreement on effective controls between all social media companies and all countries. If there are variances, they will be exploited by terrorists and promoters of violence.

Success needs measuring and failure needs to have a cost.

There needs to be effective monitoring that the commitments are being met. A grand gesture followed by nothing changing isn’t an acceptable outcome. If social media platforms don’t live up to the commitments that they make, the Call can be a place where governments agree that a kind of cost can be imposed. The simplest and most logical costs would tend to be financial (e.g. a reduction in the protection such platforms have from liability for content posted on them). But as a start, the Call can help harmonise initial thinking on potential national and regional regulation around these issues.

How could cost penalties be applied fairly and effectively across social media companies with a huge range of sizes and budgets? A million dollars is small change for Facebook; a thousand dollars would be a big deal for me.
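
One approach worth noting is the one the GDPR already takes for privacy breaches: setting the maximum penalty as a percentage of global turnover rather than a fixed sum, so the relative sting is similar regardless of company size. A rough illustrative sketch (the revenue figures and the 4% rate below are placeholders, not a proposal):

```python
# Hypothetical revenue-proportional penalty, loosely modelled on the GDPR's
# percentage-of-global-turnover approach. All figures are made up.
def penalty(annual_revenue: float, rate: float = 0.04) -> float:
    """Scale the penalty to the offender's size via a flat percentage of revenue."""
    return annual_revenue * rate

companies = [
    ("large global platform", 70_000_000_000),  # placeholder revenue
    ("small local forum", 100_000),             # placeholder revenue
]

for name, revenue in companies:
    print(f"{name}: revenue ${revenue:,.0f} -> penalty ${penalty(revenue):,.0f}")
```

A fixed fine hits the small operator far harder than the giant; a proportional one keeps the relative impact roughly comparable.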

The discussion needs to be inclusive.

Besides governments and the social media platforms, the broader technology sector and various civil society interests should be in the room helping to discuss and finalise the Call. This is because the long history of Internet policy-making shows that you get the best outcomes when all the relevant voices are in the room. Civil society plays a crucial role in helping make sure blind spots on the part of big players like government and platforms aren’t overlooked. We can’t see a situation where governments and tech companies finalise the call, and the tech sector and civil society are only brought in on the “how to implement” stage.

I don’t know how you could get close to including all relevant voices. The Internet is huge, vast.

A Call that took account of these six thoughts would have a chance of success. To achieve change it would need one more crucial point, which is why the idea of calling countries, civil society and tech platforms together is vital.

I think it is going to take a lot more than this. It’s a huge challenge.

 

Judith Collins supporting marriage equality

Twittering suggests Judith Collins has expressed support for marriage equality:

Justice minister backs marriage equality. Awesome. Confusing but awesome.

@jordantcarter memory is that she opposed civil unions because it wasn’t marriage. Some ppl will be surprised by some of the yes votes!

(edit: corrected he to she)

@KevinHague @jordantcarter it’s great JC is supportive

10:58 PM – 20 Aug 12

The only news reference I could find seems to confirm this:

Justice minister now pro gay equality, marriage

In a remarkable change in her attitude to equality for glbt people, senior government figure and Minister of Justice Judith Collins has voiced her support for marriage equality, legal adoption by same sex couples and legal recognition of a person’s gender identity which may have changed since their birth.

Speaking briefly to GayNZ.com Daily News yesterday afternoon…Collins said of marriage equality: “I’ve got no problem with it.”

On enabling same-sex couples to legally adopt children she responded: “I’ve got no particular problem with that either.”

And regarding appropriate recognition of minority gender identities: “It doesn’t hurt for us to acknowledge people’s diversity… it actually helps us.”

Asked if she felt glbt people in New Zealand are generally getting, in the words of the Human Rights Commission conference theme, “a fair go for all,” Collins said: “There are some issues that need to be dealt with. Frankly it would be really nice if we could look at people as human beings rather than be always saying ‘you can’t do that because you’re gay’ or whatever.”