How to regulate the Internet (vaguely)

How to fix speech on the Internet? It will take a lot more than this.

Jordan Carter (chief executive, InternetNZ) and Konstantinos Komaitis (senior director of global policy development and strategy at the Internet Society) offer some general ideas on how the Internet might be regulated to try to prevent it from being exploited by terrorists and extremists – How to regulate the internet without shackling its creativity

At its most basic, the internet is a decentralised technology, a “network of networks” that spans the globe, moving vast amounts of data and services. Its infrastructure layer is where protocols and standards determine the flow of data and enable independent networks to inter-operate voluntarily. A healthy infrastructure layer keeps opportunities open for everyone, because it is where unhindered innovation happens; where we build the technologies and the businesses of tomorrow.

The Christchurch terrorist did not put up a server to broadcast the video. Instead, he used the tools offered by the platforms most of us enjoy innocently. In other words, he did not directly use the internet’s infrastructure layer, but applications that run on top of it.

And this is exactly where the disconnect is. Most new rules and government interventions are spurred by illegal content that appears on the top layer of the internet – the applications layer, where content exists and proliferates. Yet these rules would have sweeping implications for the infrastructure layer as well.

Interfering with the infrastructure layer, even unintentionally, to fix problems at the content layer creates unintended consequences that hurt everyone’s ability to communicate legitimately and use the internet safely and securely. The internet is a general-purpose network, meaning it is not tailored to specific uses and applications. It is designed to keep networking and content separate. Current regulatory thinking on how to address terrorist, extremist and, in general, illegal content is incompatible with this basic premise.

That’s why we urge all governments working to protect their citizens from future terrorist and extremist content to focus on the layer of the internet where the harm occurs. Governments should seek expertise when regulating the internet, but including only certain companies in the process could be counterproductive. All this does is cement the market power of a few big actors while excluding other, critical stakeholders.

As world and tech industry leaders gather in France for the Christchurch Call, we ask them to focus on interventions that are FIT for purpose:

Fitting – proportionate, not excessive, mindful of and minimising negative and unintended consequences, and preserving the internet’s open, global, end-to-end architecture;

Informed – based on evidence and sound data about the scale and impact of the issues and how and where it is best to tackle them, using ongoing dialogue to deepen understanding and build consensus;

Targeted – aimed at the appropriate layer of the internet and minimising the impact on the infrastructure layer, whose openness and interoperability are the source of the internet’s unbounded creativity and a rich source of future human flourishing.

That’s ok as general advice, but it provides little in the way of specific ideas on how to regulate speech and media without stifling their strengths.

The biggest challenge remains – how to very quickly identify and restrict hate speech and the use of the Internet by extremists, without impinging on the freedom to exchange information, ideas and artistry.

Even from my own very narrow experience I know that people intent on spreading messages that many would object to can be very determined, and will go to some lengths to work around any restrictions imposed on them.

Kiwiblog recently put in place much more monitoring and clarified what was deemed unacceptable speech, but those stated restrictions were quickly flouted, so offending comments must be getting past the people now doing the moderating.

It will require either some very smart algorithms that are able to adapt to attempts to work around them, or a lot of monitoring and occasional intervention, which would require many people all exercising similar levels of good judgment.
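To illustrate why simple automated moderation is so easily worked around, here is a minimal sketch in Python. The banned word, the comments, and the normalisation rules are all made-up examples, not any platform’s actual filter: the point is only that a naive keyword match misses trivial character substitutions, and every counter-measure invites a new workaround.

```python
# Illustrative sketch only: a made-up banned-word list, not any real platform's rules.
BANNED_WORDS = {"badword"}

def naive_filter(comment: str) -> bool:
    """Block a comment if any word exactly matches the banned list."""
    words = comment.lower().split()
    return any(w in BANNED_WORDS for w in words)

def normalised_filter(comment: str) -> bool:
    """Slightly smarter: undo common character substitutions before matching."""
    subs = str.maketrans({"4": "a", "3": "e", "0": "o", "1": "i", "@": "a"})
    words = comment.lower().translate(subs).split()
    return any(w in BANNED_WORDS for w in words)

print(naive_filter("this is a badword"))       # True  - exact match is caught
print(naive_filter("this is a b4dword"))       # False - trivial obfuscation slips through
print(normalised_filter("this is a b4dword"))  # True  - caught after normalising
```

Each refinement only handles the evasions it anticipates – spacing, punctuation, images of text and coded language all still get through – which is why determined posters keep ahead of filters and human judgment remains necessary.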

Neither approach will be perfect. I am concerned that rushing to restrict bad speech will also create impediments for acceptable speech.