Helen Clark Foundation report: Harmful Content on Social Networks

Helen Clark backs Jacinda Ardern’s Christchurch call: ‘All key players should be there’

Former prime minister Helen Clark says those who aren’t attending the “incredibly important” Christchurch call meeting in Paris are saying more about themselves than the summit itself.

Speaking to Stuff ahead of releasing a report on reducing social media harm from her new think tank, Clark said the call was a “huge deal” and “all the key players should be there”.

“I think this says more about the people who are not going than the call itself. It’s an incredibly important call and why would those people not be there. That’s what will get the interest,” Clark said.

She said getting an issue like this on the table at a G7 meeting was “unprecedented” for New Zealand and praised Ardern for carrying on momentum.

“I think that New Zealand is going to be defined not by the horrific attack itself, but by the way she has responded. New Zealand is making a significant statement about who it is and what needs to be done locally and globally.”

The Helen Clark Foundation report's key recommendation:

We recommend a legislative response is necessary to address the spread of terrorist and harmful content online. This is because ultimately there is a profit motive for social media companies to spread ‘high engagement’ content even when it is offensive, and a long standing laissez faire culture inside the companies concerned which is resistant to regulation.


Harmful Content on Social Networks

Executive Summary

Anti-social media: reducing the spread of harmful content on social media networks

  • In the wake of the March 2019 Christchurch terrorist attack, which was livestreamed in an explicit attempt to foster support for white supremacist beliefs, it is clear that there is a problem with regard to regulating and moderating abhorrent content on social media. Both governments and social media companies could do more.
  • Our paper discusses the following issues in relation to what we can do to address this in a New Zealand context, touching on what content contributes to terrorist attacks, the legal status of that content, the moderation or policing of communities that give rise to it, the technical capacities of companies and police to identify and prevent the spread of that content, and where the responsibilities for all of this fall – with government, police, social media companies and individuals.
  • We recommend that the New Zealand Law Commission carry out a review of laws governing social media in New Zealand. To date, this issue is being addressed in a piecemeal fashion by an array of government agencies, including the Privacy Commission, the Ministry of Justice, the Department of Internal Affairs, and Netsafe.
  • Our initial analysis (which does not claim to be exhaustive) argues that while New Zealand has several laws in place to protect against the online distribution of harmful and objectionable content, there are significant gaps. These relate both to the regulation of social media companies and their legal obligations to reduce harm on their platforms and also the extent to which New Zealand law protects against hate speech based on religious beliefs and hate motivated crimes.
  • The establishment of the Royal Commission into the attack on the Christchurch Mosques on 15 March 2019 (the Royal Commission) will cover the use of social media by the attacker. However the Government has directed the Royal Commission not to inquire into, determine, or report in an interim or final way on issues related to social media platforms, as per the terms of reference. As a result, we believe that this issue – of social media platforms – remains outstanding, and in need of a coordinated response. Our paper is an initial attempt to scope out what this work could cover.
  • In the meantime, we recommend that the Government meet with social media companies operating in New Zealand to agree on an interim Code of Conduct, which outlines key commitments from social media companies on what actions they will take now to ensure the spread of terrorist and other harmful content is caught quickly and its further dissemination is cut short in the future. Limiting access to the livestream feature is one consideration, if harmful content can genuinely not be detected.
  • We support the New Zealand Government’s championing of the issue of social media governance at the global level, and support the ‘Christchurch Call’ pledge to provide a clear and consistent framework to address the spread of terrorist and extremist content online.

Helen Clark was interviewed about this on Q&A last night.


Ardern's mastery of detail and engagement on extremist use of social media

David Farrar writes that he was invited to attend a “dialogue” on the “Christchurch Call to Action to Eliminate Terrorist and Violent Extremist Content Online” at the offices of InternetNZ on Friday 10 May. He was surprised by the engagement there by Jacinda Ardern, and impressed by how she handled things and how she was “over all the detail of what is a very complex landscape which is an intersection of Internet architecture, free speech issues, social media companies, behavioural incentives and issues of market dominance”.

The purpose was “engagement” and “to build a unified sense of purpose on constructive measures to address violent extremist content online”.

This is stuff Governments do all the time. I’ve been to a lot of these.

I was a bit surprised when I got the agenda 48 hours before the meeting and read that the PM was attending the second half of the meeting for around half an hour. That was pretty unusual for a PM to attend a consultation meeting. I figured it was mainly for optics – allow for a photo op (which was mentioned in the agenda) and allow us to hear what the Government wants to achieve directly.

As the meeting resumed after the tea break, Jacinda walked in and sat down in the circle of chairs with us. I looked around the room for her minders (as I know a few of them), and there were none there. This is pretty rare. Normally a press secretary will always be with the PM, making sure they record what is said, and also an advisor to field technical questions.

As the discussion from the first session was summarised, the PM grabbed a piece of paper and started taking notes. Not a staff member, but the PM. Then the facilitator handed the meeting over to the PM. She actually chaired or facilitated the next session herself after a brief outline of what they are trying to do. As each person made a contribution, she responded with comments or followups and kept making notes.

It dawned on me that rather than this being the PM telling us what she is doing, she was genuinely engaging with those in the room for their ideas about various issues and complexities.

She was very much over the detail of what is a very complex landscape which is an intersection of Internet architecture, free speech issues, social media companies, behavioural incentives and issues of market dominance.

The combination of her mastery of detail, her actively seeking opinions and taking her own notes, her lack of staff in the room, and also the total lack of barriers between the PM and participants (all sitting around in a circle) made everyone in that room feel they were genuinely being useful, and this wasn’t just tick the box consultation. Her performance reminded me in fact of John Key at various events, as Key had a way of talking with an audience, rather than to an audience, that was first class.

This sounds very promising, both that social media issues related to violence and terrorism may have a chance of being addressed by international leaders and online media companies, and also that Ardern is growing into the job as Prime Minister and on some issues at least she is very capable of leading.

Members of ‘digital and media expert group’ respond

Yesterday the members of the ‘Digital and media expert group’ advising on social media regulation were revealed.

There was some interaction on this on Twitter with two of the members, Nat Torkington and Lizzie Marvelly.

@MatthewHootonNZ:

What are its objectives? What is its work programme? It looks to me like a sinister Labour move to censor dissent, like they did with the Electoral Finance Bill.

@LizzieMarvelly responded with information that the Prime Minister’s office withheld from Hooton’s OIA request – what the objectives of the group are:

It is an informal group of tech sector, legal and media folks that can provide feedback on request to help the Government to make sure its work in this area is effective and well-informed. This is an important kaupapa, particularly given what happened in Christchurch.

To be clear, by ‘this area’, I mean social media policy proposals.

@MatthewHootonNZ:

There is no such thing as an “informal” group if it is set up by DPMC and the PM discusses it the day of its first meeting with the political editor of the NZ Herald.

Why haven’t you declared your involvement in it? How much have you been paid? What is the work programme? Has there been a second meeting?

At that point Marvelly disengaged from the discussion, but Torkington joined in.

@gnat (Torkington):

Oh hai, Lizzie. Is it normal for you to get this kind of pig-dog blind aggression? I’ve never encountered it before. It’s like being hassled by an uppity mall cop. “I know you think you’re a knight defender of Western democracy, but your cap gun and plastic badge fool nobody.”

Pig-dog blind aggression? That Torkington has not encountered what looks to me like fairly reasonable questioning suggests he is not much of an expert on social media, or politics. I wonder if he has ever watched Question Time in Parliament.

@AlisonMau:

It’s very normal, Nat. For Lizzie and lots of other women.

And men. While women like Marvelly are subject to some awful stuff, that’s not what happened here, so this is trying to swing the conversation to a different agenda.

Torkington:

I understood that intellectually, but this is my first time in the Flappy Asshole Blast Zone. And I know this is tame in comparison to threats of sexual violence, doxxing, families, professional fuckery, etc. that y’all get every day. You deserve a🏅for showing up every day!

Later in the day Marvelly got involved again.

If the expert advisory group had been announced and named by the Prime Minister, and its objectives revealed rather than kept secret, then this sideshow wouldn’t have happened.

There are benefits with being open and transparent, but the current Government seems intent on avoiding walking that talk.


‘Digital and media expert group’ advising on social media regulation revealed

It has taken an Official Information Act request to reveal the members of a digital and media expert group assembled by the Prime Minister to advise her on possible regulation of social media.

Information about the objectives of the group was withheld – “I have considered the public interest considerations”, but surely secrecy is not in the public interest here.

NZ Herald (6 April 2019): Ardern changes down a gear from speedy gun reform to social media landscape

The areas of policy in which Ardern will be more deliberately paced are in regulation of social media, and other issues that impinge on media generally, free speech and the free exchange of ideas. The effects would be more wide-ranging and could be insidious.

Ardern has put together a group of digital and media experts who met with her for the first time in Auckland yesterday to discuss what happened and may be a sounding board and think tank for future policy proposals.

NZ Herald (8 April 2019):  Jacinda Ardern calls for global approach to block harm on digital platforms

Prime Minister Jacinda Ardern says the global community should “speak with one voice” when it comes to blocking harmful content on social media platforms.

Ardern has criticised the role of social media in the Christchurch terror attack on March 15, and she met with a group of digital media experts in Auckland on Friday to learn more about the issue.

“I wanted to make sure I had the views of those that work in the [social media] space, particularly given that questions are being raised around what role New Zealand could and should play in this debate at an international level.”

Many people ‘work in the [social media] space’. Meeting with an unnamed group is only going to get a small number of views.

She said she would be happy to say who she met with, but would seek their permission to do so first.

So if people she meets with don’t want to be revealed Ardern would keep this secret?

Matthew Hooton spotted the reference to the ‘expert group’ so put in an OIA request asking who the experts were, and also who had been invited but couldn’t attend. Yesterday he received a response.

Official Information Act request relating to the digital and media expert group the Prime Minister met with on 5 April 2019.

The group provides an informal way to test policy ideas and inform government thinking about its response to the role of social media in the events of 15 March 2019 in Christchurch. The people currently involved are:

  • Jordan Carter, Chief Executive, Internet NZ
  • Nat Torkington, technologist
  • Miriyana Alexander, Premium Content Editor, NZME
  • Rick Shera, Internet and Digital Business Law Partner, Lowndes Jordan
  • Michael Wallmansberger, cybersecurity professional, independent director; Chair of the CERT NZ Establishment Advisory Board
  • Victoria Maclennan, Managing Director, MD OptimalBI Ltd; Chair of the Digital Economy and Digital Inclusion Ministerial Advisory Group; Co-Chair, NZRise
  • John Wesley-Smith, GL Regulatory Affairs, Spark
  • Lizzie Marvelly, NZ Herald columnist, Villainesse.com co-founder and editor

Not all people involved in the group attended the meeting on Friday, 5 April 2019.

The Office and the Department of the Prime Minister and Cabinet assembled the group to have a mix of technology sector, media and legal expertise. The Government Chief Digital Officer and the Minister for Government Digital Services, Hon Dr Megan Woods, provided input on their selection.

To the request for “5. Information on future meetings and the objectives and work programme for the group”, the response was:

With regards to question five no formal work programme has been established.

Information was withheld on future meetings and the objectives, and also on these requests:

  • What were the objectives for the group at its first meeting?
  • All notes taken by officials or ministerial staff at the first meeting.

So until now we had a semi-secret advisory group, and the objectives and work programme are still secret.

What happened to Ardern’s Government’s promises of openness and transparency?

Ardern’s Chief of Staff closed his OIA response with:

In making my decision, I have considered the public interest considerations in section 9(1) of the Act.

From the Act:

9 Other reasons for withholding official information

(1) Where this section applies, good reason for withholding official information exists, for the purpose of section 5, unless, in the circumstances of the particular case, the withholding of that information is outweighed by other considerations which render it desirable, in the public interest, to make that information available.

I would have thought that it was desirable in the public interest for discussions on social media regulation to be as open as possible.

Social media is used by and affects many people. This sort of secrecy on an advisory group on possible social media regulation is alarming.

Consultation should be as wide as possible, and given the medium involved, that should be easy to do.


Martyn Bradbury makes a reasonable point: Ummmmmmmmmmmmmmmmmmmmm shouldn’t an advisory board to the PM on censoring the internet require some academics and experts on civil rights and freedom of speech?

Being manipulated on social media

A series from Smarter Every Day on how people, you included perhaps, are being manipulated on social media.

Manipulating the YouTube Algorithm – (Part 1/3)

Twitter Platform Manipulation – (Part 2/3)

People are Manipulating You on Facebook – (Part 3/3)


Jordan Carter on how to eliminate terrorist and violent material online

Jordan Carter, CEO of InternetNZ, has some ideas on how to help make Jacinda Ardern’s ‘Christchurch call’ work.

(I really wonder if labelling the attempt by Ardern to get social media companies to ‘eliminate’ terrorism online the ‘Christchurch call’ is a good idea. I think it is inappropriate.)

The Spinoff:  How to stop the ‘Christchurch Call’ on social media and terrorism falling flat

If we take that goal of eliminating terrorist and violent material online as a starting point, what could such a pledge look like, and what could it usefully achieve?

The scope needs to stay narrow.

“Terrorist and violent extremist content” is reasonably clear though there will be definitional questions to work through to strike the right balance in preventing the spread of such abhorrent material on the one hand, and maintaining free expression on the other. Upholding people’s rights needs to be at the core of the Call and what comes from it.

The targets need to be clear.

From the media release announcing the initiative, the focus is on “social media platforms”. I take that to mean companies like Facebook, Alphabet (through YouTube), Twitter and so on. These are big actors with significant audiences that can have a role in publishing or propagating access to the terrorist and violent extremist content the Call is aimed at. They have the highest chance of causing harm, in other words. It is a good thing the Call does not appear to target the entire Internet. This means the scale of action is probably achievable, because there are a relatively small and identifiable number of platforms of the requisite scale or reach.

But online media keeps changing, so it will be difficult to set a clear target. I think that limiting ‘scale and reach’ to a small number of companies would be a problem; it would be very simple to work around. If there are worldwide rules on the use of social media, they would have to cover all social media to be effective.

The ask needs to be clear.

Most social media platforms have community standards that explicitly prohibit terrorist and violent extremist content, alongside many other things. If we assume for now that the standards are appropriate (a big assumption, one that needs more consideration later on), the Call’s ask needs to centre around the standards being consistently implemented and enforced by the platforms.

Working back from a “no content ever will breach these standards” approach and exploring how AI and machine tools, and human moderation, can help should be the focus of the conversation.

That’s not very clear to me.

There needs to be a sensible application of the ask.

Applying overly tight automated filtering would lead to very widespread overblocking. What if posting a Radio New Zealand story about the Sri Lanka attacks over the weekend on Facebook was automatically blocked? Imagine if a link to a donations site for the victims of the Christchurch attacks led to the same outcome? How about sharing a video of TV news reports on either story?

This is why automation is unlikely to be the whole answer. We also will need to think through carefully how any action arising from the Call won’t give cover for problematic actions by countries with no commitment to the free, open and secure internet.

It will be extremely difficult to get consistent agreement on effective control between all social media companies and all countries. If there are variances, terrorists and promoters of violence will exploit them.

Success needs measuring and failure needs to have a cost.

There needs to be effective monitoring that the commitments are being met. A grand gesture followed by nothing changing isn’t an acceptable outcome. If social media platforms don’t live up to the commitments that they make, the Call can be a place where governments agree that a kind of cost can be imposed. The simplest and most logical costs would tend to be financial (e.g. a reduction in the protection such platforms have from liability for content posted on them). But as a start, the Call can help harmonise initial thinking on potential national and regional regulation around these issues.

How could cost penalties be applied fairly and effectively where there is a huge range of sizes and budgets among social media companies? A million dollars is small change for Facebook; a thousand dollars would be a big deal for me.

The discussion needs to be inclusive.

Besides governments and the social media platforms, the broader technology sector and various civil society interests should be in the room helping to discuss and finalise the Call. This is because the long history of Internet policy-making shows that you get the best outcomes when all the relevant voices are in the room. Civil society plays a crucial role in helping make sure blind spots on the part of big players like government and platforms aren’t overlooked. We can’t see a situation where governments and tech companies finalise the call, and the tech sector and civil society are only brought in on the “how to implement” stage.

I don’t know how you could get close to including all relevant voices. The Internet is huge, vast.

A Call that took account of these six thoughts would have a chance of success. To achieve change it would need one more crucial point, which is why the idea of calling countries, civil society and tech platforms together is vital.

I think it is going to take a lot more than this. It’s a huge challenge.


Ardern and Macron to attempt “to eliminate terrorist and violent extremist content online”

New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron will chair a meeting in Paris next month which will seek “to eliminate terrorist and violent extremist content online”.


NZ and France seek to end use of social media for acts of terrorism

New Zealand and France announced today that the two nations will bring together countries and tech companies in an attempt to bring to an end the ability to use social media to organise and promote terrorism and violent extremism, in the wake of the March 15 terrorist attacks in Christchurch, New Zealand.

The meeting will take place in Paris on May 15, and will be co-chaired by New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron.

The meeting aims to see world leaders and CEOs of tech companies agree to a pledge called the ‘Christchurch Call’ to eliminate terrorist and violent extremist content online.

The meeting will be held alongside the “Tech for Humanity” meeting of G7 Digital Ministers, of which France is the Chair, and France’s separate “Tech for Good” summit, both on 15 May. Jacinda Ardern will also meet with civil society leaders on 14 May to discuss the content of the Call.

“The March 15 terrorist attacks saw social media used in an unprecedented way as a tool to promote an act of terrorism and hate. We are asking for a show of leadership to ensure social media cannot be used again the way it was in the March 15 terrorist attack,” Jacinda Ardern said.

“We’re calling on the leaders of tech companies to join with us and help achieve our goal of eliminating violent extremism online at the Christchurch Summit in Paris.

“We all need to act, and that includes social media providers taking more responsibility for the content that is on their platforms, and taking action so that violent extremist content cannot be published and shared.

“It’s critical that technology platforms like Facebook are not perverted as a tool for terrorism, and instead become part of a global solution to countering extremism. This meeting presents an opportunity for an act of unity between governments and the tech companies.

“In the wake of the March 15 attacks New Zealanders united in common purpose to ensure such attacks never occur again. If we want to prevent violent extremist content online we need to take a global approach that involves other governments, tech companies and civil society leaders.

“Social media platforms can connect people in many very positive ways, and we all want this to continue.

“But for too long, it has also been possible to use these platforms to incite extremist violence, and even to distribute images of that violence, as happened in Christchurch. This is what needs to change.”


RNZ: ‘This is about preventing violent extremism and terrorism online’

Ms Ardern told Morning Report that since the attacks, there had been a clear call for New Zealand to take on a leadership role in combating violent extremism online.

“There is a role for New Zealand to play now in ensuring we eradicate that kind of activity from social media, in particular to prevent it from ever happening again. We can’t do that alone,” she said.

“This isn’t about freedom of expression, this is about preventing violent extremism and terrorism online.

“I don’t think anyone would argue that the terrorist, on the 15th of March, had a right to livestream the murder of 50 people, and that is what this call is very specifically focussed on”.

Ms Ardern said she’s met with a number of tech CEOs, including Facebook’s Mark Zuckerberg, and held meetings with executives from Microsoft, Twitter, and Google.

“When we actually distil this down, no tech company, no country, wants to see online platforms used to perpetuate violent extremism or terrorism. We all have a common starting point. It all then comes down to what it is we are each prepared to do about it.”

Technology correspondent Bill Bennett…

…said a voluntary approach was the only option for getting technology companies to sign up to a crackdown on terrorist behaviour through social media.

“They don’t see themselves as being responsible for content that’s published on their sites anyway. They see themselves as being some kind of neutral thing”.

National Leader Simon Bridges…

…questioned whether the global conversation would translate into anything meaningful.

He was cynical about why Ms Ardern was focusing on the issue.

“I think New Zealanders will say, hey, if you’re not also progressing policy, plans and actions around our housing, health, and education, why is this the big thing?

“Is it just a distraction tactic?”.

New Zealand needed to be cautious about going down a path that would see the casual erosion of freedoms, Mr Bridges said.

NZ Herald: Prime Minister Jacinda Ardern to lead global attempt to shutdown social media terrorism

Speaking to Newstalk ZB this morning, Ardern said she was confident all major social media companies would sign up to the Christchurch call.

“We have been working on something behind the scenes for some time now, since the 15th of March. I have also recently had calls with a handful of chief executives.”

The call, she said, would place the onus on Governments, in terms of their ability to regulate, as well as on the social media companies themselves.

“I think that’s where we need to move; this can’t just be about individual country’s [ability to] regulate because this is obviously global technology and we need to have those companies accept responsibility as well.”

She said that the principles of a free, open and secure internet would “absolutely be maintained”.

“If we want to prevent violent extremist content online we need to take a global approach that involves other governments, tech companies and civil society leaders”.

“Social media platforms can connect people in many very positive ways, and we all want this to continue.”

But she said for too long it has been possible to use social media platforms to incite extremist violence, and even to distribute images of that violence, as happened in Christchurch.

“This is what needs to change.”

A worthy aim, but it will be difficult to come up with an effective means of preventing the use of social media by terrorists while maintaining the freedom of use of social media generally.

And even if social media companies do put effective control mechanisms in place, it is likely that those seeking to promote and perpetuate violence online will find ways around the controls.

It is fine for Ardern and Macron to be seen to be trying to do something about it, but moving from being seen to be trying to doing something effective and ongoing will be a big challenge.

Social media AI big on revenue, hopeless on terrorism

There has been a lot of focus on the part played by social media in publicising the Christchurch terror attacks, in particular the live streaming and repeated uploading of the killer’s video of his massacre. Facebook and YouTube were prominent culprits. Twitter and others have also been slammed in the aftermath.

Artificial intelligence technology failed – it was originally targeted at making money, and has not been adapted to protect against harmful content.

RNZ –  French Muslim group sues Facebook, YouTube over Christchurch mosque shooting footage

One of the main groups representing Muslims in France said on Monday (local time) it was suing Facebook and YouTube, accusing them of inciting violence by allowing the streaming of footage of the Christchurch massacre on their platforms.

The French Council of the Muslim Faith (CFCM) said the companies had disseminated material that encouraged terrorism and harmed the dignity of human beings. There was no immediate comment from either company.

Facebook said it raced to remove hundreds of thousands of copies.

But a few hours after the attack, footage could still be found on Facebook, Twitter and Alphabet Inc’s YouTube, as well as Facebook-owned Instagram and Whatsapp.

Abdallah Zekri, president of the CFCM’s Islamophobia monitoring unit, said the organisation had launched a formal legal complaint against Facebook and YouTube in France.

Paul Brislen (RNZ) – Christchurch mosque attacks: How to regulate social media

Calls for social media to be regulated have escalated following their failure to act decisively in the public interest during the terror attacks in Christchurch.

The cry has been growing ever louder over the past few years. We have seen Facebook refuse to attend UK parliamentary sessions to discuss its role in the Cambridge Analytica affair, watched its CEO testify but not exactly add any clarity to inquiries into Russian interference in the US election, and seen the company accused of failing to combat the spread of hate speech amid violence in Myanmar.

US representatives are now openly talking about how to break up the company and our own prime minister has suggested that if Facebook can’t find a way to manage itself, she will.

But how do we regulate companies that don’t have offices in New Zealand (aside from the odd sales department) and that base their rights and responsibilities on another country’s legal system?

And if we are going to regulate them, how do we do it in such a way as to avoid trampling on users’ civil rights but makes sure we never see a repeat of the events of 15 March?

It’s going to be very difficult, but it has to be tried.

Richard MacManus (Newsroom) – The AI failures of Facebook & YouTube

Over the past couple of years, Facebook CEO Mark Zuckerberg has regularly trumpeted Facebook’s prowess in AI technology – in particular as a content moderation tool. As for YouTube, it’s owned by probably the world’s most technologically advanced internet company: Google.

Yet neither company was able to stop the dissemination of an appalling terrorist video, despite both claiming to be market leaders in advanced artificial intelligence.

Why is this a big deal? Because the technology already exists to shut down terror content in real-time. This according to Kalev Leetaru, a Senior Fellow at the George Washington University Center for Cyber & Homeland Security.

“We have the tools today to automatically scan all of the livestreams across all of our social platforms as they are broadcast in real time,” Leetaru wrote last week. Further, he says, these tools “are exceptionally capable of flagging a video the moment a weapon or gunfire or violence appears, pausing it from public view and referring it for immediate human review”.
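As a rough sketch of the pipeline Leetaru describes – and only a sketch, since the detector and the platform hooks here are hypothetical placeholders rather than any company’s actual API – the control flow would look something like this:

```python
from typing import Callable, Iterator

Frame = bytes  # one decoded video frame from a livestream

def moderate_stream(frames: Iterator[Frame],
                    looks_violent: Callable[[Frame], bool],
                    pause_stream: Callable[[], None],
                    refer_for_review: Callable[[Frame], None]) -> None:
    # Scan each frame as it is broadcast; the moment one is flagged
    # (e.g. by a hypothetical weapon/gunfire detector), pause the
    # stream from public view and hand the frame to a human reviewer.
    for frame in frames:
        if looks_violent(frame):
            pause_stream()
            refer_for_review(frame)
            return
```

The control flow is trivial; the hard and expensive part is the flagging model itself, which is where the cost issue raised further down comes in.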

So the technology exists, yet Facebook has admitted its AI system failed. Facebook’s vice president of integrity, Guy Rosen, told Stuff that “this particular video did not trigger our automatic detection systems.”

I have seen one reason suggested for this: the live stream of the killings was too similar to video from common first-person shooter games. But there is one key difference – it was live.

According to Leetaru, this could also have been prevented by current content hashing and content matching technologies.

Content hashing basically means applying a digital signature to a piece of content. If another piece of content is substantially similar to the original, it can easily be flagged and deleted immediately. As Leetaru notes, this process has been successfully used for years to combat copyright infringement and child pornography.

The social platforms have “extraordinarily robust” content signature matching, says Leetaru, “able to flag even trace amounts of the original content buried under an avalanche of other material”.
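To make the idea concrete, here is a minimal sketch of both techniques – an exact digital signature that catches byte-identical re-uploads, plus a toy perceptual ‘average hash’ whose bit signature survives re-encoding and small edits. It is illustrative only; production systems (Microsoft’s PhotoDNA, for example) are far more robust, and the threshold below is an assumption:

```python
import hashlib

def exact_hash(data: bytes) -> str:
    # Exact digital signature: any byte-identical re-upload matches.
    return hashlib.sha256(data).hexdigest()

def average_hash(pixels: list[list[int]]) -> int:
    # Toy perceptual hash over a small grayscale thumbnail (e.g. 8x8):
    # each bit records whether a pixel is brighter than the mean, so
    # re-encoded or lightly altered copies produce nearly the same bits.
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    # Number of differing bits between two signatures.
    return bin(a ^ b).count("1")

def is_near_duplicate(candidate: int, known_bad: int, threshold: int = 5) -> bool:
    # Flag content whose signature is "substantially similar" to a
    # known-bad original, as the quoted passage describes.
    return hamming(candidate, known_bad) <= threshold
```

Matching “trace amounts” of original content buried in other material, as Leetaru describes, takes much more than this – but the flag-on-similarity principle is the same.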

But clearly, this approach either wasn’t used by Facebook or YouTube to prevent distribution of the Christchurch terrorist’s video, or if it was used it had an unacceptably high failure rate.

Leetaru’s own conclusion is damning for Facebook and YouTube.

“The problem is that these approaches have substantial computational cost associated with them and when used in conjunction with human review to ensure maximal coverage, would eat into the companies’ profits,” he says.

Profit versus protection. It appears that the social media companies need to be pushed more towards protection.

In the aftermath of this tragedy, I’ve also wondered if more could have been done to identify, monitor and shut down the terrorist’s social media presence – not to mention alert authorities – before he committed his monstrous crime.

There’s certainly a case to be made for big tech companies to work closely with government intelligence agencies, at least for the most obvious and extreme instances of people posting hate content.

In an email exchange, I asked Leetaru what he thinks of social platforms working more closely with governments on policing hate content.

“So, the interaction between social platforms and governments is a complex space,” he replied. “Governments already likely use court orders to compel the socials to provide them data on ordinary users and dissidents. And if socials work with one government to remove “terrorist” users, other governments are going to demand the same abilities, but they might define a “terrorist” as someone who criticizes the government or “threatens national stability” by publishing information that undermines the government – like corruption charges. So, socials are understandably loathe to work more closely with governments, though they do [already] work closely with many Western governments.”

But the problem is not just with ‘terrorist’ users. Many individuals contribute to stoking hate and intolerance.

Here in New Zealand, there have already been people arrested and tried in court under the Objectionable Publications Act for sharing the terrorist video.

The problem is, hundreds of other people shared the video using anonymous accounts on YouTube, Reddit and other platforms where a real name isn’t required. Could AI tech help identify these anonymous cowards, then ban them from social media and report them to police?

Again, I recognise there are significant privacy implications to unmasking anonymous accounts. But I think it’s worth at least having the discussion.

“In many countries, there are limits to what you can do,” said Leetaru when I asked him about this. “Here in the US, social platforms are private companies. They can remove the content, but there are few laws restricting sharing the content – so there’s not much that could be done against those individuals legally.”

He also warned against naming and shaming anonymous trolls.

“Name and shame is always dangerous, since IP addresses are rotated regularly by ISPs – meaning your IP today might be someone across town’s tomorrow. And bad actors often use VPNs or other means to conceal their activity, including using their neighbour’s wifi or a coffee shop.”

Challenging times.

Online media giants are being challenged to improve their control of terrorism, violence, personal attacks and harassment.

Online minnows have a role to play. It’s going to take some thought and time on how to manage this.

This will be a work in progress for everyone who has a responsibility for the publication of material from individuals, many of them anonymous.


Russian influence in 2016 US election a social media facilitated democratic and social war

Foreign interference in a country’s election is a serious matter. A US Senate Intelligence Committee report details Russian efforts to influence the outcome of the 2016 presidential election using social media.

NY Times: Russian 2016 Influence Operation Targeted African-Americans on Social Media

The Russian influence campaign on social media in the 2016 election made an extraordinary effort to target African-Americans, used an array of tactics to try to suppress turnout among Democratic voters and unleashed a blizzard of activity on Instagram that rivaled or exceeded its posts on Facebook, according to a report produced for the Senate Intelligence Committee.

The report adds new details to the portrait that has emerged over the last two years of the energy and imagination of the Russian effort to sway American opinion and divide the country, which the authors said continues to this day.

“Active and ongoing interference operations remain on several platforms,” says the report, produced by New Knowledge, a cybersecurity company based in Austin, Texas, along with researchers at Columbia University and Canfield Research LLC. One continuing Russian campaign, for instance, seeks to influence opinion on Syria by promoting Bashar al-Assad, the Syrian president and a Russian ally in the brutal conflict there.

The New Knowledge report, which was obtained by The New York Times in advance of its scheduled release on Monday, is one of two commissioned by the Senate committee on a bipartisan basis. They are based largely on data about the Russian operations provided to the Senate by Facebook, Twitter and the other companies whose platforms were used.

The second report was written by the Computational Propaganda Project at Oxford University along with Graphika, a company that specializes in analyzing social media. The Washington Post first reported on the Oxford report on Sunday.

The Russian influence campaign in 2016 was run by a St. Petersburg company called the Internet Research Agency, owned by a businessman, Yevgeny V. Prigozhin, who is a close ally of President Vladimir V. Putin of Russia. Mr. Prigozhin and a dozen of the company’s employees were indicted last February as part of the investigation of Russian interference by Robert S. Mueller III, the special counsel.

So it would seem that Mueller has been doing some important and successful investigations.

Both reports stress that the Internet Research Agency created social media accounts under fake names on virtually every available platform. A major goal was to support Donald Trump, first against his Republican rivals in the presidential race, then in the general election, and as president since his inauguration.

This wasn’t just an anti-Democrat, pro-Republican campaign of interference in the election; it was also a pro-Trump, anti-Republican-opponent campaign. So it started with interference in the democratic selection processes of the Republican Party, and once that was successful it became an anti-Hillary Clinton and anti-Democrat campaign.

US democracy was already in a poor state, dominated by monied interests, but it has now been trashed further by a foreign government.

And because some people got the election outcome they wanted, they make excuses and ignore the serious nature of this interference.

The Russian campaign was the subject of Senate hearings last year and has been widely scrutinized by academic experts. The new reports largely confirm earlier findings: that the campaign was designed to attack Hillary Clinton, boost Mr. Trump and exacerbate existing divisions in American society.

The interference aims also included trying to divide and trash US society.

Questions still need to be answered about why Trump was aided in the candidate selection process and the presidential election. There are claims and indications that the Trump side saw financial and power rewards.

Did the Russians see a potential puppet whose strings they could pull to get US policies that favoured Russia? Or did they see an opportunity to diminish the power of the US by dividing their society? Possibly both.

The threats of nuclear war and the standoff of the Cold War are now history. Russia versus the United States has become a social media facilitated democratic and social war.

But Trump is president and that’s all that matters, the end justifies the means?

The problem with this is that the end is nigh, not done and dusted.

Impact of social media on mental health of young people

This is from The Economist, showing the effect of various social media platforms on young British people.

According to a survey in 2017 by the Royal Society for Public Health, Britons aged 14-24 believe that Facebook, Instagram, Snapchat and Twitter have detrimental effects on their wellbeing. On average, they reported that these social networks gave them extra scope for self-expression and community-building. But they also said that the platforms exacerbated anxiety and depression, deprived them of sleep, exposed them to bullying and created worries about their body image and “FOMO” (“fear of missing out”). Academic studies have found that these problems tend to be particularly severe among frequent users.

From How heavy use of social media is linked to mental illness