Being manipulated on social media

A series from Smarter Every Day on how people – perhaps including you – are being manipulated on social media.

Manipulating the YouTube Algorithm – (Part 1/3)

Twitter Platform Manipulation – (Part 2/3)

People are Manipulating You on Facebook – (Part 3/3)


Social media AI big on revenue, hopeless on terrorism

There has been a lot of focus on the part played by social media in publicising the Christchurch terror attacks, in particular the live streaming and repeated uploading of the killer’s video of his massacre. Facebook and YouTube were prominent culprits. Twitter and others have also been slammed in the aftermath.

Artificial intelligence technology failed – it was built primarily to make money, and has not been adapted to protect against harmful content.

RNZ – French Muslim group sues Facebook, YouTube over Christchurch mosque shooting footage

One of the main groups representing Muslims in France said on Monday (local time) it was suing Facebook and YouTube, accusing them of inciting violence by allowing the streaming of footage of the Christchurch massacre on their platforms.

The French Council of the Muslim Faith (CFCM) said the companies had disseminated material that encouraged terrorism and harmed the dignity of human beings. There was no immediate comment from either company.

Facebook said it raced to remove hundreds of thousands of copies.

But a few hours after the attack, footage could still be found on Facebook, Twitter and Alphabet Inc’s YouTube, as well as Facebook-owned Instagram and Whatsapp.

Abdallah Zekri, president of the CFCM’s Islamophobia monitoring unit, said the organisation had launched a formal legal complaint against Facebook and YouTube in France.

Paul Brislen (RNZ) – Christchurch mosque attacks: How to regulate social media

Calls for social media to be regulated have escalated following their failure to act decisively in the public interest during the terror attacks in Christchurch.

The cry has been growing ever louder over the past few years. We have seen Facebook refuse to attend UK parliamentary sessions to discuss its role in the Cambridge Analytica affair, watched its CEO testify but not exactly add any clarity to inquiries into Russian interference in the US election, and seen the company accused of failing to combat the spread of hate speech amid violence in Myanmar.

US representatives are now openly talking about how to break up the company and our own prime minister has suggested that if Facebook can’t find a way to manage itself, she will.

But how do we regulate companies that don’t have offices in New Zealand (aside from the odd sales department) and that base their rights and responsibilities on another country’s legal system?

And if we are going to regulate them, how do we do it in such a way as to avoid trampling on users’ civil rights while making sure we never see a repeat of the events of 15 March?

It’s going to be very difficult, but it has to be tried.

Richard MacManus (Newsroom) – The AI failures of Facebook & YouTube

Over the past couple of years, Facebook CEO Mark Zuckerberg has regularly trumpeted Facebook’s prowess in AI technology – in particular as a content moderation tool. As for YouTube, it’s owned by probably the world’s most technologically advanced internet company: Google.

Yet neither company was able to stop the dissemination of an appalling terrorist video, despite both claiming to be market leaders in advanced artificial intelligence.

Why is this a big deal? Because the technology already exists to shut down terror content in real-time. This according to Kalev Leetaru, a Senior Fellow at the George Washington University Center for Cyber & Homeland Security.

“We have the tools today to automatically scan all of the livestreams across all of our social platforms as they are broadcast in real time,” Leetaru wrote last week. Further, he says, these tools “are exceptionally capable of flagging a video the moment a weapon or gunfire or violence appears, pausing it from public view and referring it for immediate human review”.
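To make the idea concrete, here is a minimal sketch of what real-time scanning could look like, assuming a frame-by-frame classifier. It uses OpenCV to sample frames from a stream; `violence_score` is a hypothetical placeholder for the kind of weapon and gunfire detector Leetaru describes, not any platform’s actual system.

```python
# Minimal sketch of real-time livestream scanning (illustrative only).
# Assumes OpenCV (cv2) is installed; violence_score is a hypothetical
# placeholder for a trained weapon/gunfire/violence classifier.
import cv2

FLAG_THRESHOLD = 0.8   # confidence above which the stream is pulled for review
SAMPLE_EVERY_N = 30    # roughly one frame per second at 30 fps

def violence_score(frame) -> float:
    """Stand-in for a real model; a trained classifier would return the
    probability that the frame shows a weapon, gunfire or violence."""
    return 0.0

def scan_stream(stream_url: str) -> None:
    capture = cv2.VideoCapture(stream_url)
    frame_index = 0
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break
        if frame_index % SAMPLE_EVERY_N == 0 and \
           violence_score(frame) >= FLAG_THRESHOLD:
            # Pause the stream from public view and refer it for human review
            print(f"frame {frame_index}: flagged for immediate human review")
            break
        frame_index += 1
    capture.release()
```

The point is not the code itself but the pipeline – sample, score, pause, refer to a human. Running something like this across every live stream on a platform, backed by human reviewers, is where the cost Leetaru mentions below comes in.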

So the technology exists, yet Facebook has admitted its AI system failed. Facebook’s vice president of integrity, Guy Rosen, told Stuff that “this particular video did not trigger our automatic detection systems.”

One reason I have seen suggested is that the live stream of the killings looked too similar to footage from common first-person shooter games. But there is one key difference – it was live.

According to Leetaru, this could also have been prevented by current content hashing and content matching technologies.

Content hashing basically means applying a digital signature to a piece of content. If another piece of content is substantially similar to the original, it can easily be flagged and deleted immediately. As Leetaru notes, this process has been successfully used for years to combat copyright infringement and child pornography.
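As a rough illustration of content hashing (not the platforms’ actual systems), the sketch below uses the open-source Pillow and imagehash libraries to compute a perceptual hash of a frame and flag re-uploads whose hashes sit within a small Hamming distance of a known-bad original; the threshold value is an assumption made for the example.

```python
# Minimal sketch of content hashing / matching (illustrative only).
# Uses the open-source Pillow and imagehash libraries; real platform systems
# are far more robust and also work on audio and full video, not still frames.
from PIL import Image
import imagehash

MATCH_DISTANCE = 8  # hashes this close are treated as "substantially similar"

def perceptual_hash(path: str) -> imagehash.ImageHash:
    """Compute a perceptual hash that survives re-encoding, resizing, etc."""
    return imagehash.phash(Image.open(path))

def is_known_bad(path: str, blocklist: list[imagehash.ImageHash]) -> bool:
    """Flag content whose hash is close to any hash on the blocklist."""
    candidate = perceptual_hash(path)
    return any(candidate - known < MATCH_DISTANCE for known in blocklist)

# Usage: hash a frame of the original video, then test frames from re-uploads
# blocklist = [perceptual_hash("original_frame.png")]
# if is_known_bad("reuploaded_frame.png", blocklist): remove and report it
```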

The social platforms have “extraordinarily robust” content signature matching, says Leetaru, “able to flag even trace amounts of the original content buried under an avalanche of other material”.

But clearly, this approach either wasn’t used by Facebook or YouTube to prevent distribution of the Christchurch terrorist’s video, or, if it was used, it had an unacceptably high failure rate.

Leetaru’s own conclusion is damning for Facebook and YouTube.

“The problem is that these approaches have substantial computational cost associated with them and when used in conjunction with human review to ensure maximal coverage, would eat into the companies’ profits,” he says.

Profit versus protection. It appears that the social media companies need to be pushed more towards protection.

In the aftermath of this tragedy, I’ve also wondered if more could have been done to identify, monitor and shut down the terrorist’s social media presence – not to mention alert authorities – before he committed his monstrous crime.

There’s certainly a case to be made for big tech companies to work closely with government intelligence agencies, at least for the most obvious and extreme instances of people posting hate content.

In an email exchange, I asked Leetaru what he thinks of social platforms working more closely with governments on policing hate content.

“So, the interaction between social platforms and governments is a complex space,” he replied. “Governments already likely use court orders to compel the socials to provide them data on ordinary users and dissidents. And if socials work with one government to remove “terrorist” users, other governments are going to demand the same abilities, but they might define a “terrorist” as someone who criticizes the government or “threatens national stability” by publishing information that undermines the government – like corruption charges. So, socials are understandably loathe to work more closely with governments, though they do [already] work closely with many Western governments.”

But the problem is not just with ‘terrorist’ users. Many individuals contribute to stoking hate and intolerance.

Here in New Zealand, there have already been people arrested and tried in court under the Objectionable Publications Act for sharing the terrorist video.

The problem is, hundreds of other people shared the video using anonymous accounts on YouTube, Reddit and other platforms where a real name isn’t required. Could AI tech help identify these anonymous cowards, then ban them from social media and report them to police?

Again, I recognise there are significant privacy implications to unmasking anonymous accounts. But I think it’s worth at least having the discussion.

“In many countries, there are limits to what you can do,” said Leetaru when I asked him about this. “Here in the US, social platforms are private companies. They can remove the content, but there are few laws restricting sharing the content – so there’s not much that could be done against those individuals legally.”

He also warned against naming and shaming anonymous trolls.

“Name and shame is always dangerous, since IP addresses are rotated regularly by ISPs – meaning your IP today might be someone across town’s tomorrow. And bad actors often use VPNs or other means to conceal their activity, including using their neighbour’s wifi or a coffee shop.”

Challenging times.

Online media giants are being challenged to improve their control of terrorism, violence, personal attacks and harassment.

Online minnows have a role to play too. It’s going to take some thought and time to work out how to manage this.

This will be a work in progress for everyone who has a responsibility for the publication of material from individuals, many of them anonymous.