Social media AI big on revenue, hopeless on terrorism

There has been a lot of focus on the part played by social media in publicising the Christchurch terror attacks, in particular the live streaming and repeated uploading of the killer's video of his massacre. Facebook and YouTube were prominent culprits. Twitter and others have also been slammed in the aftermath.

Artificial intelligence technology failed – it was originally built to make money, and it has not been adapted to protect against harmful content.

RNZ – French Muslim group sues Facebook, YouTube over Christchurch mosque shooting footage

One of the main groups representing Muslims in France said on Monday (local time) it was suing Facebook and YouTube, accusing them of inciting violence by allowing the streaming of footage of the Christchurch massacre on their platforms.

The French Council of the Muslim Faith (CFCM) said the companies had disseminated material that encouraged terrorism and harmed the dignity of human beings. There was no immediate comment from either company.

Facebook said it raced to remove hundreds of thousands of copies.

But a few hours after the attack, footage could still be found on Facebook, Twitter and Alphabet Inc's YouTube, as well as Facebook-owned Instagram and WhatsApp.

Abdallah Zekri, president of the CFCM’s Islamophobia monitoring unit, said the organisation had launched a formal legal complaint against Facebook and YouTube in France.

Paul Brislen (RNZ) – Christchurch mosque attacks: How to regulate social media

Calls for social media to be regulated have escalated following their failure to act decisively in the public interest during the terror attacks in Christchurch.

The cry has been growing ever louder over the past few years. We have seen Facebook refuse to attend UK parliamentary sessions to discuss its role in the Cambridge Analytica affair, watched its CEO testify but not exactly add any clarity to inquiries into Russian interference in the US election, and seen the company accused of failing to combat the spread of hate speech amid violence in Myanmar.

US representatives are now openly talking about how to break up the company and our own prime minister has suggested that if Facebook can’t find a way to manage itself, she will.

But how do we regulate companies that don’t have offices in New Zealand (aside from the odd sales department) and that base their rights and responsibilities on another country’s legal system?

And if we are going to regulate them, how do we do it in such a way as to avoid trampling on users' civil rights while making sure we never see a repeat of the events of 15 March?

It’s going to be very difficult, but it has to be tried.

Richard MacManus (Newsroom) – The AI failures of Facebook & YouTube

Over the past couple of years, Facebook CEO Mark Zuckerberg has regularly trumpeted Facebook’s prowess in AI technology – in particular as a content moderation tool. As for YouTube, it’s owned by probably the world’s most technologically advanced internet company: Google.

Yet neither company was able to stop the dissemination of an appalling terrorist video, despite both claiming to be market leaders in advanced artificial intelligence.

Why is this a big deal? Because the technology already exists to shut down terror content in real time. That's according to Kalev Leetaru, a Senior Fellow at the George Washington University Center for Cyber & Homeland Security.

“We have the tools today to automatically scan all of the livestreams across all of our social platforms as they are broadcast in real time,” Leetaru wrote last week. Further, he says, these tools “are exceptionally capable of flagging a video the moment a weapon or gunfire or violence appears, pausing it from public view and referring it for immediate human review”.
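
To make Leetaru's point concrete, here is a minimal sketch of what such a pipeline could look like: sample frames from a live stream, score each with a violence/weapon detector, and pull the stream for human review the moment a frame is flagged. This is an illustration only, not any platform's actual system; the classifier here is a hypothetical stand-in for a trained vision model.

```python
# Minimal sketch of real-time livestream moderation, assuming a hypothetical
# frame classifier. Not Facebook's or YouTube's actual pipeline.

from dataclasses import dataclass
from typing import Iterable


@dataclass
class Frame:
    timestamp: float  # seconds into the stream
    pixels: bytes     # raw image data (format left abstract here)


def looks_violent(frame: Frame) -> float:
    """Hypothetical stand-in for a trained weapon/gunfire/violence detector.

    Returns a confidence score in [0, 1]. A real deployment would use a
    neural network served at scale; this stub always returns 0.0.
    """
    return 0.0


def pause_stream() -> None:
    """Hide the stream from public view while a moderator looks at it."""
    print("Stream paused pending review")


def refer_to_human_review(frame: Frame, score: float) -> None:
    """Escalate the flagged frame so a human makes the final call."""
    print(f"Frame at {frame.timestamp:.1f}s flagged (score {score:.2f})")


def moderate_stream(frames: Iterable[Frame], threshold: float = 0.8) -> None:
    """Scan a live stream frame by frame and escalate suspicious content."""
    for frame in frames:
        score = looks_violent(frame)
        if score >= threshold:
            pause_stream()
            refer_to_human_review(frame, score)
            return
```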

So the technology exists, yet Facebook has admitted its AI system failed. Facebook's vice president of integrity, Guy Rosen, told Stuff that "this particular video did not trigger our automatic detection systems."

One explanation I have seen is that the live stream of the killings looked too similar to footage from common first-person shooter games. But there is one key difference – this was live.

According to Leetaru, this could also have been prevented by current content hashing and content matching technologies.

Content hashing basically means applying a digital signature to a piece of content. If another piece of content is substantially similar to the original, it can easily be flagged and deleted immediately. As Leetaru notes, this process has been successfully used for years to combat copyright infringement and child pornography.
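
As a rough illustration of how content hashing works (not any platform's proprietary implementation), here is a small Python sketch using the widely known difference-hash (dHash) approach: reduce an image to a tiny grayscale grid, encode which neighbouring pixels get brighter or darker as bits, and compare hashes by how many bits differ. The file names in the usage comment are hypothetical.

```python
# Minimal perceptual-hash ("content hashing") sketch using difference hashing.
# Requires Pillow: pip install Pillow

from PIL import Image


def dhash(path: str, size: int = 8) -> int:
    """Compute a 64-bit difference hash of the image at `path`."""
    # Shrink to a (size+1) x size grayscale grid so fine detail and
    # re-encoding artefacts are washed out.
    img = Image.open(path).convert("L").resize((size + 1, size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = pixels[row * (size + 1) + col]
            right = pixels[row * (size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of bits that differ between two hashes."""
    return bin(a ^ b).count("1")


# Hypothetical usage: a re-upload of a known frame should hash within a few
# bits of the original, even after resizing or re-encoding.
# if hamming(dhash("known_bad_frame.png"), dhash("uploaded_frame.png")) <= 10:
#     print("Likely copy of flagged content - block and refer for review")
```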

The social platforms have “extraordinarily robust” content signature matching, says Leetaru, “able to flag even trace amounts of the original content buried under an avalanche of other material”.

But clearly, this approach either wasn’t used by Facebook or YouTube to prevent distribution of the Christchurch terrorist’s video, or if it was used it had an unacceptably high failure rate.

Leetaru’s own conclusion is damning for Facebook and YouTube.

“The problem is that these approaches have substantial computational cost associated with them and when used in conjunction with human review to ensure maximal coverage, would eat into the companies’ profits,” he says.

Profit versus protection. It appears that the social media companies need to be pushed more towards protection.

In the aftermath of this tragedy, I’ve also wondered if more could have been done to identify, monitor and shut down the terrorist’s social media presence – not to mention alert authorities – before he committed his monstrous crime.

There’s certainly a case to be made for big tech companies to work closely with government intelligence agencies, at least for the most obvious and extreme instances of people posting hate content.

In an email exchange, I asked Leetaru what he thinks of social platforms working more closely with governments on policing hate content.

“So, the interaction between social platforms and governments is a complex space,” he replied. “Governments already likely use court orders to compel the socials to provide them data on ordinary users and dissidents. And if socials work with one government to remove “terrorist” users, other governments are going to demand the same abilities, but they might define a “terrorist” as someone who criticizes the government or “threatens national stability” by publishing information that undermines the government – like corruption charges. So, socials are understandably loathe to work more closely with governments, though they do [already] work closely with many Western governments.”

But the problem is not just with 'terrorist users'. Many individuals contribute to stoking hate and intolerance.

Here in New Zealand, there have already been people arrested and tried in court under the Objectionable Publications Act for sharing the terrorist video.

The problem is, hundreds of other people shared the video using anonymous accounts on YouTube, Reddit and other platforms where a real name isn’t required. Could AI tech help identify these anonymous cowards, then ban them from social media and report them to police?

Again, I recognise there are significant privacy implications to unmasking anonymous accounts. But I think it’s worth at least having the discussion.

“In many countries, there are limits to what you can do,” said Leetaru when I asked him about this. “Here in the US, social platforms are private companies. They can remove the content, but there are few laws restricting sharing the content – so there’s not much that could be done against those individuals legally.”

He also warned against naming and shaming anonymous trolls.

“Name and shame is always dangerous, since IP addresses are rotated regularly by ISPs – meaning your IP today might be someone across town’s tomorrow. And bad actors often use VPNs or other means to conceal their activity, including using their neighbour’s wifi or a coffee shop.”

Challenging times.

Online media giants are being challenged to improve their control of terrorism, violence, personal attacks and harassment.

Online minnows have a role to play too. It's going to take some thought and time to work out how to manage this.

This will be a work in progress for everyone who has a responsibility for the publication of material from individuals, many of them anonymous.

 


6 Comments

  1. The Consultant

     26th March 2019

    Excellent news. If Facebook burned to the ground I could not be happier. Same with Twitter.

    However, aside from the challenges of regulating them, there's also the problem that, having established itself as such a huge player, Facebook at this stage might welcome regulations because they keep competitors and future challengers down. This sort of regulatory trap has been around for a century, starting with large US companies in meat processing, oil and other areas in the 1900s. They actually lobbied for regulations – the right ones, of course. Once embedded in the regulatory process, they could rely on their influence to help them.

    I also wonder if that French Muslim group will be going after President Erdogan of Turkey?

  2. Finbaar Rustle

     26th March 2019

    Still clutching at straws. If social media were the cause of the 15 March attacks, we'd all be doing it. Angry males have been carrying out terror attacks for thousands of years.

    Gun law reform, profiling, homeland security, social media reform, impassioned speeches, vigils, calls for unity – what exactly do they achieve? Probably nothing. But people want to feel safe again and to move on, hence the plethora of promises and assurances that everything possible is being done to prevent another event any time soon.

    In reality little can be done when millions of people have no personal security systems in place and freedom of movement is a treasured and expected part of life. And despite the events of the 15th, most attacks will come from people you know. Being aware and crossing our fingers might be the best we can do.

  3. Alan Wilkinson

     26th March 2019

    This technology, if refined sufficiently to work, will become a powerful tool for anyone in power to suppress any messages they don't like. Be careful what you wish for, as the solution may be far worse than the problem.

    • Kitty Catkin

       26th March 2019

      Jacinda may be giving herself too much power here. She can't shut down any of these things, or regulate them; she will be left looking very silly if she's not careful.

      I cannot see why they can't have efficient trigger systems. If an ordinary site can bar words like ass and bitch (barring quotes from PG Wodehouse as well as the Bible, and not allowing 'Spaniel bitch'), why on earth can't something as sophisticated as FB do it for hate speech and urging terrorism?

      'Live' could be anything; Neil Diamond Live at the Greek would be banned if 'live' was, given that 'live' can mean a recording of a performance with an audience. But surely, surely a computer could be programmed to do a cross check. Live (beep) … music … singing … PASS ON. Live (beep) … shooting … STOP!

    • Kitty Catkin

       26th March 2019

      We survived without FB etc before; we had email (the inventor of that should be canonised), and I use FB so seldom that I wouldn't miss it.

