Sacha Baron Cohen on Mark Zuckerberg and Facebook

Facebook tightening livestreaming rules

Just prior to signing the Christchurch Call agreement in Paris, Facebook announced that it is tightening its rules on livestreaming.

Reuters: Facebook restricts Live feature, citing New Zealand shooting

Facebook Inc said on Tuesday it was tightening rules around its livestreaming feature ahead of a meeting of world leaders aimed at curbing online violence in the aftermath of a massacre in New Zealand.

Facebook said in a statement it was introducing a “one-strike” policy for use of Facebook Live, temporarily restricting access for people who have faced disciplinary action for breaking the company’s most serious rules anywhere on its site.

First-time offenders will be suspended from using Live for set periods of time, the company said. It is also broadening the range of offences that will qualify for one-strike suspensions.

Facebook did not specify which offences were eligible for the one-strike policy or how long suspensions would last, but a spokeswoman said it would not have been possible for the shooter to use Live on his account under the new rules.

The company said it plans to extend the restrictions to other areas over coming weeks, beginning with preventing the same people from creating ads on Facebook.

It also said it would fund research at three universities on techniques to detect manipulated media, which Facebook’s systems struggled to spot in the aftermath of the attack.

New Zealand Prime Minister Jacinda Ardern has responded:


Comment from Jacinda Ardern on Facebook livestreaming announcement

“Facebook’s decision to put limits on livestreaming is a good first step to restrict the application being used as a tool for terrorists and shows the Christchurch Call is being acted on,” Prime Minister Jacinda Ardern said.

“Today’s announcement addresses a key component of the Christchurch Call, a shared commitment to making livestreaming safer.

“The March 15 terrorist highlighted just how easily livestreaming can be misused for hate. Facebook has made a tangible first step to stop that act being repeated on their platform.

“Facebook’s announcement of new research into detecting manipulated media across images, video and audio in order to take it down is welcomed.

“Multiple edited and manipulated versions of the March 15 massacre quickly spread online, and the take down was slow as a result. New technology to prevent the easy spread of terrorist content will be a major contributor to making social media safer for users, and stopping the unintentional viewing of extremist content like so many people in New Zealand did after the attack, including myself, when it auto played in Facebook feeds.

“The Christchurch Call gets agreement from tech companies to take initiatives to end the spread of terrorist content online. There is a lot more work to do, but I am pleased Facebook has taken additional steps today alongside the Call and look forward to a long term collaboration to make social media safer by removing terrorist content from it.”


Attendance at Ardern and Macron’s social media summit in Paris

New Zealand Prime Minister Jacinda Ardern is co-chairing a meeting of world leaders and the tech industry with French President Emmanuel Macron in Paris on Thursday (NZ time), to build support for her “Christchurch Call” – a pledge to try to stop violent extremist content from spreading online.

Ardern explained her aims in an op-ed in the NY Times – see Jacinda Ardern ‘opinion’ in NY Times.

There aren’t a lot of world leaders attending in Paris – the short notice would have made it difficult for some – but enough to make it a worthwhile attempt to get things rolling. In fact, too many leaders might have made it harder to reach agreement.

Stuff: Who is and isn’t coming to Jacinda Ardern’s Paris summit on social media

This week’s meeting is being co-chaired by French President Macron. France is hosting the G7 Digital Summit, which sits alongside the Christchurch Call meeting.

The pledge will be launched two months to the day after the terror attack in Christchurch, which the alleged killer livestreamed on Facebook.

She will be joined by UK Prime Minister Theresa May, Canadian Prime Minister Justin Trudeau, French President Emmanuel Macron, European Commission President Jean-Claude Juncker, Irish Taoiseach Leo Varadkar, Norwegian Prime Minister Erna Solberg, Senegal President Macky Sall, and King Abdullah II of Jordan.

Ardern said talks were “ongoing” with the United States, where most of these large firms are based, but it was clear President Donald Trump would not be making the trip.

Because of a quirk of tax law, however, many of the companies have vast subsidiaries based in Ireland, which is sending its leader.

Facebook itself is sending its head of global affairs, and former UK deputy prime minister, Nick Clegg.

Zuckerberg did travel to Paris on Friday to meet Macron, with whom he has an ongoing relationship.

Ardern has engaged with both Zuckerberg and Sandberg following the attack. She told Stuff it would have been preferable for Zuckerberg to attend, but she was more interested in a concrete result than who attended.

“Would we have found it preferable to have Mark Zuckerberg there? Absolutely. However the most important point for me is a commitment from Facebook. I would absolutely trade having them sign up to this than anything around a presence at this event. It’s the action that is important to us.”

Twitter is the only tech company sending its chief executive, Jack Dorsey. Microsoft is sending President Brad Smith while Wikimedia is sending Wikipedia founder Jimmy Wales. Google is sending Senior Vice President for Global Affairs Kent Walker.

I expect that the tech company representatives would have to get any commitments approved by their management, so it’s unlikely the Christchurch Call summit in Paris will produce anything like a final solution to violent extremist content online, but it is a step in the right direction.

Trump versus Facebook

I think that Facebook has a right to choose who uses their platform.

The President can grizzle about who Facebook bans as much as he likes, but he shouldn’t be able to dictate to Facebook who it should allow to use its media platform.

Being manipulated on social media

A series from Smarter Every Day on how people – perhaps you included – are being manipulated on social media.

Manipulating the YouTube Algorithm – (Part 1/3)

Twitter Platform Manipulation – (Part 2/3)

People are Manipulating You on Facebook – (Part 3/3)


New Zealand trying to lead crackdown on social media

Without knowing any details, I don’t know whether to be pleased or concerned about attempts by the New Zealand Government to lead a crackdown on social media.

It is too easy for people and organisations to spread false and damaging information via social media, but attempts to deal with this could easily lurch too far in limiting freedom of expression.

NZ Herald – Social media crackdown: How New Zealand is leading the global charge

Steps towards global regulation of social media companies to rein in harmful content look likely, with the Government set to take a lead role in a global initiative, the Herald has learned.

The will of governments to work together to tackle the potentially harmful impacts of social media would have only grown stronger in the wake of the terror attacks in Sri Lanka, where Facebook and Instagram were temporarily shut down in that country to stop the spread of false news reports.

Following the Christchurch terror attack, Prime Minister Jacinda Ardern has been working towards a global co-ordinated response that would make the likes of Facebook, YouTube and Twitter more responsible for the content they host.

The social media companies should be held to account for what they enable, but it’s a very tricky thing to address without squashing rights and freedoms.

Currently multinational social media companies have to comply with New Zealand law, but they also have an out-clause – called the safe harbour provisions – that means they may not be legally liable for what users publish on their sites, though these were not used in relation to the livestream video of the massacre in Christchurch.

Other countries, including Australia, are taking a more hardline approach that puts more onus on these companies to block harmful content, but the Government has decided a global response would be more effective, given the companies’ global reach.

Facebook has faced a barrage of criticism for what many see as its failure to immediately take down the livestream and minimise its spread; Facebook removed 1.5 million videos of the attack within 24 hours.

They were too ineffective and too slow – that they took down one and a half million copies shows how quickly the video spread before action was taken.

Ardern has said this wasn’t good enough, saying shortly after the Christchurch terror attack: “We cannot simply sit back and accept that these platforms just exist and that what is said on them is not the responsibility of the place where they are published.”

Among those adding their voices to this sentiment were the bosses of Spark, Vodafone and 2degrees and the managers of five government-related funds, who all called on social media companies to do more to combat harmful content.

Privacy Commissioner John Edwards has also been scathing, calling Facebook “morally bankrupt” and saying it should take immediate action to make its services safe.

Netsafe chief executive Martin Cocker said that existing laws and protections were not enough to stop the online proliferation of the gunman’s video.

He doubted that changing any New Zealand laws would be effective, and echoed Ardern in saying that a global solution was ideal.

But it is generally much harder to get international agreement on restrictive laws, so a global solution may be very difficult to achieve. In reality there is never likely to be ‘a solution’; all they can do is make it harder for harmful content to proliferate.

The UK is currently considering a white paper on online harms that proposes a “statutory duty of care” for online content hosts.

Rules would be set up and enforced by an independent regulator, which would demand that illegal content be blocked within “an expedient timeframe”. Failure to comply could lead to substantial fines or even shutting down the service.

The problem is that, to be effective, the timeframe has to be just about instant.

In Australia a law was recently passed that requires hosting services to “remove abhorrent violent material expeditiously” or face up to three years’ jail or fines in the millions of dollars.

Germany also has a law that gives social media companies an hour to remove “manifestly unlawful” posts such as hate speech, or face a fine of up to 50 million euros.

And the European Union is considering regulations that would give social media platforms an hour to remove or disable online terrorist content.

In New Zealand multiple laws – including the Harmful Digital Communications Act, the Human Rights Act, and the Crimes Act – dictate what can and cannot be published on social media platforms.

While Ardern has ruled out a model such as Australia’s, changes to New Zealand law could still happen following the current review of hate speech.

Legally defining ‘hate speech’ will be difficult enough, and applying laws governing speech will require decisions and judgements to be made by people. That could be very difficult to do effectively.


Social media AI big on revenue, hopeless on terrorism

There has been a lot of focus on the part played by social media in publicising the Christchurch terror attacks, in particular the livestreaming and repeated uploading of the killer’s video of his massacre. Facebook and YouTube were prominent culprits. Twitter and others have also been slammed in the aftermath.

Artificial intelligence technology failed – it was originally aimed at making money, and has not been adapted to protect against harmful content.

RNZ – French Muslim group sues Facebook, YouTube over Christchurch mosque shooting footage

One of the main groups representing Muslims in France said on Monday (local time) it was suing Facebook and YouTube, accusing them of inciting violence by allowing the streaming of footage of the Christchurch massacre on their platforms.

The French Council of the Muslim Faith (CFCM) said the companies had disseminated material that encouraged terrorism and harmed the dignity of human beings. There was no immediate comment from either company.

Facebook said it raced to remove hundreds of thousands of copies.

But a few hours after the attack, footage could still be found on Facebook, Twitter and Alphabet Inc’s YouTube, as well as Facebook-owned Instagram and Whatsapp.

Abdallah Zekri, president of the CFCM’s Islamophobia monitoring unit, said the organisation had launched a formal legal complaint against Facebook and YouTube in France.

Paul Brislen (RNZ) – Christchurch mosque attacks: How to regulate social media

Calls for social media to be regulated have escalated following their failure to act decisively in the public interest during the terror attacks in Christchurch.

The cry has been growing ever louder over the past few years. We have seen Facebook refuse to attend UK parliamentary sessions to discuss its role in the Cambridge Analytica affair, watched its CEO testify but not exactly add any clarity to inquiries into Russian interference in the US election, and seen the company accused of failing to combat the spread of hate speech amid violence in Myanmar.

US representatives are now openly talking about how to break up the company and our own prime minister has suggested that if Facebook can’t find a way to manage itself, she will.

But how do we regulate companies that don’t have offices in New Zealand (aside from the odd sales department) and that base their rights and responsibilities on another country’s legal system?

And if we are going to regulate them, how do we do it in such a way as to avoid trampling on users’ civil rights while making sure we never see a repeat of the events of 15 March?

It’s going to be very difficult, but it has to be tried.

Richard MacManus (Newsroom) – The AI failures of Facebook & YouTube

Over the past couple of years, Facebook CEO Mark Zuckerberg has regularly trumpeted Facebook’s prowess in AI technology – in particular as a content moderation tool. As for YouTube, it’s owned by probably the world’s most technologically advanced internet company: Google.

Yet neither company was able to stop the dissemination of an appalling terrorist video, despite both claiming to be market leaders in advanced artificial intelligence.

Why is this a big deal? Because the technology already exists to shut down terror content in real-time. This according to Kalev Leetaru, a Senior Fellow at the George Washington University Center for Cyber & Homeland Security.

“We have the tools today to automatically scan all of the livestreams across all of our social platforms as they are broadcast in real time,” Leetaru wrote last week. Further, he says, these tools “are exceptionally capable of flagging a video the moment a weapon or gunfire or violence appears, pausing it from public view and referring it for immediate human review”.
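What Leetaru describes amounts to a simple loop: sample frames from the stream as they arrive, score each frame with a trained classifier, and pause the stream for human review the moment a score crosses a threshold. Here is a minimal sketch of that loop in Python – with the caveat that the classifier and the platform hooks are hypothetical placeholders for illustration, not real Facebook or YouTube APIs:

def pause_stream(stream_id):
    """Placeholder hook: hide the stream from public view immediately."""
    print(f"stream {stream_id} paused pending review")

def queue_for_human_review(stream_id, frame):
    """Placeholder hook: escalate the flagged frame to a human moderator."""
    print(f"stream {stream_id}: frame escalated for human review")

def moderate_livestream(stream_id, frames, classifier, threshold=0.8):
    """Score each sampled frame; pause and escalate as soon as one is flagged."""
    for frame in frames:
        # classifier returns a confidence that the frame shows a weapon,
        # gunfire or violence (a stand-in for a trained vision model)
        if classifier(frame) >= threshold:
            pause_stream(stream_id)
            queue_for_human_review(stream_id, frame)
            return

# Usage with a dummy classifier that flags the third sampled frame:
scores = iter([0.1, 0.2, 0.95])
moderate_livestream("live-123", range(3), lambda frame: next(scores))

The hard part is not this loop but the classifier behind it – and, as becomes clear below, the cost of running it over every stream.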

So the technology exists, yet Facebook has admitted its AI system failed. Facebook’s vice president of integrity, Guy Rosen, told Stuff that “this particular video did not trigger our automatic detection systems.”

One reason I have seen suggested for this is that the livestream of the killings looked too similar to footage from common first-person shooter games. But there is one key difference – it was live.

According to Leetaru, this could also have been prevented by current content hashing and content matching technologies.

Content hashing basically means applying a digital signature to a piece of content. If another piece of content is substantially similar to the original, it can easily be flagged and deleted immediately. As Leetaru notes, this process has been successfully used for years to combat copyright infringement and child pornography.

The social platforms have “extraordinarily robust” content signature matching, says Leetaru, “able to flag even trace amounts of the original content buried under an avalanche of other material”.
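The core idea behind content hashing is simple enough to sketch. Below is a toy illustration in Python of a perceptual “average hash” – assuming the Pillow imaging library, with hypothetical file names, and far cruder than the robust matching Leetaru describes. Near-duplicate frames produce fingerprints that differ in only a few bits, so re-uploads can be flagged by comparison against a blocklist of known-bad hashes.

from PIL import Image

def average_hash(path, size=8):
    """Shrink, grayscale and threshold on the mean pixel value to turn an
    image into a 64-bit fingerprint; near-duplicates get near-identical bits."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a, b):
    """Count the differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Flag an upload if its fingerprint is close to any known-bad fingerprint
# (file names here are hypothetical examples).
blocklist = {average_hash("known_bad_frame.png")}
candidate = average_hash("uploaded_frame.png")
if any(hamming_distance(candidate, bad) <= 5 for bad in blocklist):
    print("Near-duplicate of blocked content – refer for review and removal")

Production systems use far more robust signatures, and match video and audio as well as single frames, but the flag-by-similarity principle is the same.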

But clearly, this approach either wasn’t used by Facebook or YouTube to prevent distribution of the Christchurch terrorist’s video, or if it was used it had an unacceptably high failure rate.

Leetaru’s own conclusion is damning for Facebook and YouTube.

“The problem is that these approaches have substantial computational cost associated with them and when used in conjunction with human review to ensure maximal coverage, would eat into the companies’ profits,” he says.

Profit versus protection. It appears that the social media companies need to be pushed more towards protection.

In the aftermath of this tragedy, I’ve also wondered if more could have been done to identify, monitor and shut down the terrorist’s social media presence – not to mention alert authorities – before he committed his monstrous crime.

There’s certainly a case to be made for big tech companies to work closely with government intelligence agencies, at least for the most obvious and extreme instances of people posting hate content.

In an email exchange, I asked Leetaru what he thinks of social platforms working more closely with governments on policing hate content.

“So, the interaction between social platforms and governments is a complex space,” he replied. “Governments already likely use court orders to compel the socials to provide them data on ordinary users and dissidents. And if socials work with one government to remove “terrorist” users, other governments are going to demand the same abilities, but they might define a “terrorist” as someone who criticizes the government or “threatens national stability” by publishing information that undermines the government – like corruption charges. So, socials are understandably loathe to work more closely with governments, though they do [already] work closely with many Western governments.”

But the problem is not just with ‘terrorist’ users. Many individuals contribute to stoking hate and intolerance.

Here in New Zealand, there have already been people arrested and tried in court under the Objectionable Publications Act for sharing the terrorist video.

The problem is, hundreds of other people shared the video using anonymous accounts on YouTube, Reddit and other platforms where a real name isn’t required. Could AI tech help identify these anonymous cowards, then ban them from social media and report them to police?

Again, I recognise there are significant privacy implications to unmasking anonymous accounts. But I think it’s worth at least having the discussion.

“In many countries, there are limits to what you can do,” said Leetaru when I asked him about this. “Here in the US, social platforms are private companies. They can remove the content, but there are few laws restricting sharing the content – so there’s not much that could be done against those individuals legally.”

He also warned against naming and shaming anonymous trolls.

“Name and shame is always dangerous, since IP addresses are rotated regularly by ISPs – meaning your IP today might be someone across town’s tomorrow. And bad actors often use VPNs or other means to conceal their activity, including using their neighbour’s wifi or a coffee shop.”

Challenging times.

Online media giants are being challenged to improve their control of terrorism, violence, personal attacks and harassment.

Online minnows have a role to play as well. It’s going to take some thought and time to work out how to manage this.

This will be a work in progress for everyone who has a responsibility for the publication of material from individuals, many of them anonymous.


Facebook, Google accused of inciting violence

It may be more a case of allowing violence to be incited, but is there a difference?

The US Five Eyes/Huawei threat

It looks like the US is trying to play hardball to deter Five Eyes allies from using Huawei technology. Is this for security or economic reasons? Possibly both.

Who would you prefer to have a back door into your data, China or the US? Huawei denies allowing secret access, but we know US technology companies have helped their secret services.

Newsroom: US delivers Five Eyes threat over Huawei

The United States has delivered the most explicit threat yet to New Zealand’s role in the Five Eyes alliance if it allows Huawei into the 5G network, saying it will not share information with any country which allows the Chinese company into “critical information systems”.

The remarks from US Secretary of State Mike Pompeo call into question claims from Kiwi politicians and officials that outside pressure is not behind a decision to block Huawei equipment from being used by Spark in its 5G network.

The decision, made by the Government Communications Security Bureau late last year, has sparked fears of retaliation from China against New Zealand including a report in the CCP-owned Global Times which suggested Chinese tourists were turning away from the country in protest.

In an interview with Fox Business News, Pompeo said the country had been speaking to other nations to ensure they understood the risk of putting Huawei technology into their infrastructure.

“We can’t forget these systems were designed with the express work alongside the Chinese PLA, their military in China, they are creating real risk for these countries and their systems, the security of their people…

“We’re out sharing this information, the knowledge that America has gained through its vast network and making sure countries understand the risk. That’s important – we think they’ll make good decisions when they understand that risk.”

Asked specifically about the risks posed to Americans’ information through alliances like Five Eyes if partners allowed Huawei into their systems, Pompeo said that would be an obstacle to any future relationships.

“If a country adopts this and puts it in some of their critical information systems, we won’t be able to share information with them, we won’t be able to work alongside them.”

Given New Zealand has remained a part of Five Eyes despite allowing Huawei into its 4G and ultra-fast broadband networks, it is unclear how real the threat is – although intelligence officials have acknowledged that 5G networks provide an added layer of risk.

But the secret services of countries are not the only risk to our privacy.

Be very afraid?

If an antacid advertisement pops up after you burp, or a laxative advertisement pops up after you fart, then it may be too late.

The Government may be able to tax us on our measured emissions.

Facebook breaches privacy and trust again

Facebook can be a useful way of keeping in touch – I have been involved in a group that has brought wider family together online after little communication previously – but another revelation of a breach of privacy adds to concerns about using Facebook.

Guardian: Is 2019 the year you should finally quit Facebook?

Prepare yourself for an overwhelming sense of deja vu: another Facebook privacy “scandal” is upon us.

A New York Times investigation has found that Facebook gave Netflix, Spotify and the Royal Bank of Canada (RBC) the ability to read, write and delete users’ private messages. The Times investigation, based on hundreds of pages of internal Facebook documents, also found that Facebook gave 150 partners more access to user data than previously disclosed. Microsoft, Sony and Amazon, for example, could obtain the contact information of their users’ friends.

Netflix, Spotify and RBC have all denied doing anything nefarious with your private messages. Netflix tweeted that it never asked for the ability to look at them; Spotify says it had no idea it had that sort of access; RBC disputes it even had the ability to see users’ messages. Whether they accessed your information or not, however, is not the point. The point is that Facebook should never have given them this ability without getting your explicit permission to do so.

In a tone-deaf response to the Times investigation, the tech giant explained: “None of these partnerships or features gave companies access to information without people’s permission, nor did they violate our 2012 settlement with the FTC.” Perhaps not, but they did violate public trust.

This just reinforces warnings about use of anything online – treat it as if anything you say or post could be public.

One of the problems with Facebook is that it is difficult if not impossible to know what others see of what we post. We simply don’t know what Facebook shows or makes available to others, and they have shown time and again that they can’t be trusted.

Facebook (and other websites) give us a lot, but take a lot from us collectively, and put their own commercial interests first.

The Times’ new report caps off a very bad year for Facebook when it comes to public trust. Let’s just recap a few of the bigger stories, shall we?

  • March: The Observer reveals that Cambridge Analytica harvested the data of millions of Facebook users without their consent for political purposes. It is also revealed that Facebook had been keeping records of Android users’ phone calls and texts.
  • April: It was revealed that Facebook was in secret talks with hospitals to get them to share patients’ private medical data.
  • September: Hackers gained access to around 30m Facebook accounts.
  • November: Facebook acknowledges it didn’t do enough to stop its platform being used as a tool to incite genocidal violence in Myanmar. A New York Times report reveals the company hired a PR firm to try to discredit critics by claiming they were agents of George Soros.
  • December: Facebook admitted it exposed private photos from 6.8 million users to apps that weren’t authorized to view them. (You can check if you were affected via this Facebook link.)

If you’re still on Facebook after everything that has happened this year, you need to ask yourself why. Is the value you get from the platform really worth giving up all your data for? More broadly, are you comfortable being part of the reason that Facebook is becoming so dangerously powerful?

In March, following the Cambridge Analytica scandal, Facebook put out print ads stating: “We have a responsibility to protect your information. If we can’t, we don’t deserve it.” I think they’ve proved by now that they don’t deserve it. Time and time again Facebook has made it abundantly clear that it is a morally bankrupt company that is never going to change unless it is forced to.

What’s more, Facebook has made it very clear that it thinks it can get away with anything because its users are idiots. Zuckerberg famously called the first Facebook users “dumb fucks” for handing their personal information over to him; his disdain for the people whose data he deals with doesn’t appear to have lessened over time.

I will keep using Facebook for what suits me, but I will continue to give them little in current or personal information. And I will continue to ignore advertising.