Social media AI big on revenue, hopeless on terrorism

There has been a lot of focus on the part played by social media in publicising the Christchurch terror attacks, in particular the live streaming and repeated uploading of the killer’s video of his massacre. Facebook and YouTube were prominent culprits. Twitter and others have also been slammed in the aftermath.

Artificial intelligence technology failed – it was originally targeted at money-making, and has not been adapted to protect against harmful content.

RNZ – French Muslim group sues Facebook, YouTube over Christchurch mosque shooting footage

One of the main groups representing Muslims in France said on Monday (local time) it was suing Facebook and YouTube, accusing them of inciting violence by allowing the streaming of footage of the Christchurch massacre on their platforms.

The French Council of the Muslim Faith (CFCM) said the companies had disseminated material that encouraged terrorism and harmed the dignity of human beings. There was no immediate comment from either company.

Facebook said it raced to remove hundreds of thousands of copies.

But a few hours after the attack, footage could still be found on Facebook, Twitter and Alphabet Inc’s YouTube, as well as Facebook-owned Instagram and WhatsApp.

Abdallah Zekri, president of the CFCM’s Islamophobia monitoring unit, said the organisation had launched a formal legal complaint against Facebook and YouTube in France.

Paul Brislen (RNZ) – Christchurch mosque attacks: How to regulate social media

Calls for social media to be regulated have escalated following their failure to act decisively in the public interest during the terror attacks in Christchurch.

The cry has been growing ever louder over the past few years. We have seen Facebook refuse to attend UK parliamentary sessions to discuss its role in the Cambridge Analytica affair, watched its CEO testify but not exactly add any clarity to inquiries into Russian interference in the US election, and seen the company accused of failing to combat the spread of hate speech amid violence in Myanmar.

US representatives are now openly talking about how to break up the company and our own prime minister has suggested that if Facebook can’t find a way to manage itself, she will.

But how do we regulate companies that don’t have offices in New Zealand (aside from the odd sales department) and that base their rights and responsibilities on another country’s legal system?

And if we are going to regulate them, how do we do it in such a way as to avoid trampling on users’ civil rights while making sure we never see a repeat of the events of 15 March?

It’s going to be very difficult, but it has to be tried.

Richard MacManus (Newsroom) – The AI failures of Facebook & YouTube

Over the past couple of years, Facebook CEO Mark Zuckerberg has regularly trumpeted Facebook’s prowess in AI technology – in particular as a content moderation tool. As for YouTube, it’s owned by probably the world’s most technologically advanced internet company: Google.

Yet neither company was able to stop the dissemination of an appalling terrorist video, despite both claiming to be market leaders in advanced artificial intelligence.

Why is this a big deal? Because the technology already exists to shut down terror content in real time, according to Kalev Leetaru, a Senior Fellow at the George Washington University Center for Cyber & Homeland Security.

“We have the tools today to automatically scan all of the livestreams across all of our social platforms as they are broadcast in real time,” Leetaru wrote last week. Further, he says, these tools “are exceptionally capable of flagging a video the moment a weapon or gunfire or violence appears, pausing it from public view and referring it for immediate human review”.

So the technology exists, yet Facebook has admitted its AI system failed. Facebook’s vice president of integrity, Guy Rosen, told Stuff that “this particular video did not trigger our automatic detection systems.”

One reason I have seen suggested is that the live stream of the killings looked too similar to footage from common first-person shooter games. But there is one key difference – it was live.

According to Leetaru, this could also have been prevented by current content hashing and content matching technologies.

Content hashing basically means applying a digital signature to a piece of content. If another piece of content is substantially similar to the original, it can easily be flagged and deleted immediately. As Leetaru notes, this process has been successfully used for years to combat copyright infringement and child pornography.

The social platforms have “extraordinarily robust” content signature matching, says Leetaru, “able to flag even trace amounts of the original content buried under an avalanche of other material”.
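The content-hashing idea described above can be sketched in a few lines. This is a minimal illustration, not any platform’s actual system: the names (`fingerprint`, `UploadFilter`) are made up for the example, and it uses an exact cryptographic hash for simplicity, whereas production systems use perceptual hashes so that re-encoded or slightly edited copies still match.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Digital signature of a piece of content (exact-match hash)."""
    return hashlib.sha256(content).hexdigest()

class UploadFilter:
    """Toy blocklist: flag any upload whose signature matches known bad content."""
    def __init__(self):
        self.blocked: set[str] = set()

    def block(self, content: bytes) -> None:
        """Register a known bad piece of content."""
        self.blocked.add(fingerprint(content))

    def is_blocked(self, content: bytes) -> bool:
        """Check an incoming upload against the blocklist."""
        return fingerprint(content) in self.blocked

f = UploadFilter()
f.block(b"original harmful video bytes")
print(f.is_blocked(b"original harmful video bytes"))  # exact copy -> True
print(f.is_blocked(b"re-encoded video bytes"))        # altered copy -> False
```

The last line shows the weakness of exact hashing: any re-encode changes the bytes and the match fails. That is why the “extraordinarily robust” matching Leetaru describes relies on perceptual fingerprints of the audio and video, which survive cropping, compression and re-uploading.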

But clearly, this approach either wasn’t used by Facebook or YouTube to prevent distribution of the Christchurch terrorist’s video, or if it was used it had an unacceptably high failure rate.

Leetaru’s own conclusion is damning for Facebook and YouTube.

“The problem is that these approaches have substantial computational cost associated with them and when used in conjunction with human review to ensure maximal coverage, would eat into the companies’ profits,” he says.

Profit versus protection. It appears that the social media companies need to be pushed more towards protection.

In the aftermath of this tragedy, I’ve also wondered if more could have been done to identify, monitor and shut down the terrorist’s social media presence – not to mention alert authorities – before he committed his monstrous crime.

There’s certainly a case to be made for big tech companies to work closely with government intelligence agencies, at least for the most obvious and extreme instances of people posting hate content.

In an email exchange, I asked Leetaru what he thinks of social platforms working more closely with governments on policing hate content.

“So, the interaction between social platforms and governments is a complex space,” he replied. “Governments already likely use court orders to compel the socials to provide them data on ordinary users and dissidents. And if socials work with one government to remove “terrorist” users, other governments are going to demand the same abilities, but they might define a “terrorist” as someone who criticizes the government or “threatens national stability” by publishing information that undermines the government – like corruption charges. So, socials are understandably loathe to work more closely with governments, though they do [already] work closely with many Western governments.”

But the problem is not just with ‘terrorist’ users. Many individuals contribute to stoking hate and intolerance.

Here in New Zealand, there have already been people arrested and tried in court under the Objectionable Publications Act for sharing the terrorist video.

The problem is, hundreds of other people shared the video using anonymous accounts on YouTube, Reddit and other platforms where a real name isn’t required. Could AI tech help identify these anonymous cowards, then ban them from social media and report them to police?

Again, I recognise there are significant privacy implications to unmasking anonymous accounts. But I think it’s worth at least having the discussion.

“In many countries, there are limits to what you can do,” said Leetaru when I asked him about this. “Here in the US, social platforms are private companies. They can remove the content, but there are few laws restricting sharing the content – so there’s not much that could be done against those individuals legally.”

He also warned against naming and shaming anonymous trolls.

“Name and shame is always dangerous, since IP addresses are rotated regularly by ISPs – meaning your IP today might be someone across town’s tomorrow. And bad actors often use VPNs or other means to conceal their activity, including using their neighbour’s wifi or a coffee shop.”

Challenging times.

Online media giants are being challenged to improve their control of terrorism, violence, personal attacks and harassment.

Online minnows have a role to play too. It’s going to take some thought and time to work out how to manage this.

This will be a work in progress for everyone who has a responsibility for the publication of material from individuals, many of them anonymous.


Harassment of Muslims continues

While there has been a huge amount of sympathy and support shown for the Muslim community in New Zealand, there are claims of continued harassment of Muslims, especially Muslim women. And attacks on Muslims continue online.

Newshub: Jacinda Ardern ‘devastated’ as anti-Muslim attacks continue after Christchurch shooting

Most of what we’ve seen so far from the public toward the Muslim community has been love. But Anjum Rahman from the Islamic Council of Women told Newshub that hatred is still around, with Muslims reporting being threatened even since the terror attack.

“People are having people pretend to shoot them – ripping hijab off women,” Manning said.

When confronted with this on Monday, Prime Minister Jacinda Ardern said: “I think it’s devastating to know that when a community has been the subject of a direct attack like this that they would then be subject to threats.”

The Guardian has reported a 593 percent increase in anti-Muslim hate crimes in the UK in the aftermath of the Christchurch shooting.

That’s an alarming reaction in the UK to the Christchurch attacks. I think it’s reasonable to assume it is in part fed by online abuse.

Police couldn’t give Newshub data on any potential increase in New Zealand, but the Prime Minister is urging anyone who has experienced attacks or threats to report them.

“Please report it – they are taking them seriously, they are following them up,” she said.

It seems every single threat is now being treated that much more seriously.

As they should be.

There have been positive changes. NZ Herald: A changed world after Christchurch mosque attacks

An Auckland Muslim woman has described how her world has changed since the Christchurch terror attacks, which have helped unite the country and counter racial hatred.

Fijian-born mother-of-three Neelufah Hannif was once called a “curry muncher” and for years felt too uncomfortable to wear her hijab to work.

But the 40-year-old public servant has sensed a shift in attitudes towards inclusiveness and racial harmony since an extremist gunman killed 50 Muslim worshippers in two mosque attacks on March 15.

“I think the last few days have shown that people are compassionate, they’ve shown empathy and they have grieved with the Muslim community. I think this is who we are, this is who we have always been and I hope this will continue.”

“New Zealanders have shown solidarity and it’s comforting to know we are ‘one’ and people are there for us,” she said.

But also from NZH – Trevor Richards: NZ in denial about its anti-Muslim racism

In France following the January 2015 attack, the catchphrase heard and seen everywhere in Paris had been “Je suis Charlie (I am Charlie)”.

Here, it was not an Islamic terrorist attack against citizens of a Western country, but an attack by a white nationalist extremist against Muslims at prayer. This difference is important in determining the responses of the two countries.

France’s response to both attacks contained an ugly underbelly. Islamic terrorists had been the attackers. In the week following the Charlie Hebdo attack, a total of 60 anti-Muslim incidents were reported.

In New Zealand, Muslims had been the victims. The immediate nationwide response was to support and embrace Muslim communities.

On the Sunday following the attack, two young Muslim women at an Auckland railway station were told to “go back to your f****** country”. For some in this country, they are not us.

Unlike France, such horrific events are new to us. It has been widely claimed that New Zealand will never be the same again. The good news is that life will “get back to normal”, as Norway seems to have after Breivik. But our image of ourselves as a small country at the bottom of the world, happily immune from extremist right-wing political psychopaths and the more vicious edges of world politics, has gone. That will inevitably change us in ways yet to be realised.

Like everywhere in the world, New Zealand will never be free of racism, of religious animosity, of prejudice and of fear.

But we can all play a part in making things better than they were before the attacks in Christchurch.