New Zealand trying to lead crackdown on social media

Without knowing any details I don’t know whether to be pleased or concerned about attempts by the New Zealand Government to lead a crackdown on social media.

It is too easy for people and organisations to spread false and damaging information via social media, but attempts to deal with this could easily lurch too far in limiting freedom of expression.

NZ Herald – Social media crackdown: How New Zealand is leading the global charge

Steps towards global regulation of social media companies to rein in harmful content look likely, with the Government set to take a lead role in a global initiative, the Herald has learned.

The will of governments to work together to tackle the potentially harmful impacts of social media would have only grown stronger in the wake of the terror attacks in Sri Lanka, where Facebook and Instagram were temporarily shut down in that country to stop the spread of false news reports.

Following the Christchurch terror attack, Prime Minister Jacinda Ardern has been working towards a global co-ordinated response that would make the likes of Facebook, YouTube and Twitter more responsible for the content they host.

The social media companies should be held to account for what they enable, but it’s a very tricky thing to address without squashing rights and freedoms.

Currently multinational social media companies have to comply with New Zealand law, but they also have an out-clause – called the safe harbour provisions – that means they may not be legally liable for what users publish on their sites, though these were not used in relation to the livestream video of the massacre in Christchurch.

Other countries, including Australia, are taking a more hardline approach that puts more onus on these companies to block harmful content, but the Government has decided a global response would be more effective, given the companies’ global reach.

Facebook has faced a barrage of criticism for what many see as its failure to immediately take down the livestream and minimise its spread; Facebook removed 1.5 million videos of the attack within 24 hours.

They were too ineffective and too slow – that 1.5 million copies had to be taken down shows how quickly the video spread before action was taken.

Ardern has said this wasn’t good enough, saying shortly after the Christchurch terror attack: “We cannot simply sit back and accept that these platforms just exist and that what is said on them is not the responsibility of the place where they are published.”

Among those adding their voices to this sentiment were the bosses of Spark, Vodafone and 2degrees and the managers of five government-related funds, who all called on social media companies to do more to combat harmful content.

Privacy Commissioner John Edwards has also been scathing, calling Facebook “morally bankrupt” and saying it should take immediate action to make its services safe.

Netsafe chief executive Martin Cocker said that existing laws and protections were not enough to stop the online proliferation of the gunman’s video.

He doubted that changing any New Zealand laws would be effective, and echoed Ardern in saying that a global solution was ideal.

But it is generally much harder to get international agreement on restrictive laws, so a global solution may be very difficult to achieve. In reality there is never likely to be ‘a solution’; all that can be done is make it harder for harmful material to proliferate.

The UK is currently considering a white paper on online harms that proposes a “statutory duty of care” for online content hosts.

Rules would be set up and enforced by an independent regulator, which would demand illegal content to be blocked within “an expedient timeframe”. Failure to comply could lead to substantial fines or even shutting down the service.

The problem is that, to be effective, the timeframe has to be just about instant.

In Australia a law was recently passed that requires hosting services to “remove abhorrent violent material expeditiously” or face up to three years’ jail or fines in the millions of dollars.

Germany also has a law that gives social media companies an hour to remove “manifestly unlawful” posts such as hate speech, or face a fine up to 50 million Euros.

And the European Union is considering regulations that would give social media platforms an hour to remove or disable online terrorist content.

In New Zealand multiple laws – including the Harmful Digital Communications Act, the Human Rights Act, and the Crimes Act – dictate what can and cannot be published on social media platforms.

While Ardern has ruled out a model such as Australia’s, changes to New Zealand law could still happen following the current review of hate speech.

Legally defining ‘hate speech’ will be difficult enough, and applying laws governing speech will require decisions and judgements to be made by people. That could be very difficult to do effectively.

Social media AI big on revenue, hopeless on terrorism

There has been a lot of focus on the part played by social media in publicising the Christchurch terror attacks, in particular the live streaming and repeated uploading of the killer’s video of his massacre. Facebook and YouTube were prominent culprits. Twitter and others have also been slammed in the aftermath.

Artificial Intelligence technology failed – it was originally targeted at money making, and has not been adapted to protect against harmful content.

RNZ – French Muslim group sues Facebook, YouTube over Christchurch mosque shooting footage

One of the main groups representing Muslims in France said on Monday (local time) it was suing Facebook and YouTube, accusing them of inciting violence by allowing the streaming of footage of the Christchurch massacre on their platforms.

The French Council of the Muslim Faith (CFCM) said the companies had disseminated material that encouraged terrorism and harmed the dignity of human beings. There was no immediate comment from either company.

Facebook said it raced to remove hundreds of thousands of copies.

But a few hours after the attack, footage could still be found on Facebook, Twitter and Alphabet Inc’s YouTube, as well as Facebook-owned Instagram and Whatsapp.

Abdallah Zekri, president of the CFCM’s Islamophobia monitoring unit, said the organisation had launched a formal legal complaint against Facebook and YouTube in France.

Paul Brislen (RNZ) – Christchurch mosque attacks: How to regulate social media

Calls for social media to be regulated have escalated following their failure to act decisively in the public interest during the terror attacks in Christchurch.

The cry has been growing ever louder over the past few years. We have seen Facebook refuse to attend UK parliamentary sessions to discuss its role in the Cambridge Analytica affair, watched its CEO testify but not exactly add any clarity to inquiries into Russian interference in the US election, and seen the company accused of failing to combat the spread of hate speech amid violence in Myanmar.

US representatives are now openly talking about how to break up the company and our own prime minister has suggested that if Facebook can’t find a way to manage itself, she will.

But how do we regulate companies that don’t have offices in New Zealand (aside from the odd sales department) and that base their rights and responsibilities on another country’s legal system?

And if we are going to regulate them, how do we do it in such a way as to avoid trampling on users’ civil rights but makes sure we never see a repeat of the events of 15 March?

It’s going to be very difficult, but it has to be tried.

Richard MacManus (Newsroom) – The AI failures of Facebook & YouTube

Over the past couple of years, Facebook CEO Mark Zuckerberg has regularly trumpeted Facebook’s prowess in AI technology – in particular as a content moderation tool. As for YouTube, it’s owned by probably the world’s most technologically advanced internet company: Google.

Yet neither company was able to stop the dissemination of an appalling terrorist video, despite both claiming to be market leaders in advanced artificial intelligence.

Why is this a big deal? Because the technology already exists to shut down terror content in real-time. This according to Kalev Leetaru, a Senior Fellow at the George Washington University Center for Cyber & Homeland Security.

“We have the tools today to automatically scan all of the livestreams across all of our social platforms as they are broadcast in real time,” Leetaru wrote last week. Further, he says, these tools “are exceptionally capable of flagging a video the moment a weapon or gunfire or violence appears, pausing it from public view and referring it for immediate human review”.

So the technology exists, yet Facebook has admitted its AI system failed. Facebook’s vice president of integrity, Guy Rosen, told Stuff that “this particular video did not trigger our automatic detection systems.”

One reason I have seen suggested for this is that the livestream of the killings was too similar to video of common killing games. But there is one key difference – it was live.

According to Leetaru, this could also have been prevented by current content hashing and content matching technologies.

Content hashing basically means applying a digital signature to a piece of content. If another piece of content is substantially similar to the original, it can easily be flagged and deleted immediately. As Leetaru notes, this process has been successfully used for years to combat copyright infringement and child pornography.
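Leetaru’s description can be sketched in a few lines. The names below are illustrative, not any platform’s actual system, and for simplicity this sketch uses an exact cryptographic hash (SHA-256): it catches byte-identical re-uploads, whereas production systems use perceptual hashes that also match re-encoded or slightly altered copies.

```python
import hashlib

def content_signature(data: bytes) -> str:
    """Compute a digital signature (here a SHA-256 digest) for a piece of content."""
    return hashlib.sha256(data).hexdigest()

class ContentBlocklist:
    """Flags uploads whose signature matches previously flagged content."""

    def __init__(self) -> None:
        self._known_bad: set[str] = set()

    def register(self, data: bytes) -> None:
        # Hash the original flagged content and remember its signature.
        self._known_bad.add(content_signature(data))

    def is_flagged(self, data: bytes) -> bool:
        # Any byte-identical re-upload produces the same digest and is caught.
        return content_signature(data) in self._known_bad

# Hypothetical usage
blocklist = ContentBlocklist()
original = b"...bytes of a flagged video..."
blocklist.register(original)
print(blocklist.is_flagged(original))         # exact copy is caught
print(blocklist.is_flagged(original + b"x"))  # one changed byte evades an exact hash
```

The last line shows why exact hashing alone is not enough: “substantially similar” matching, as Leetaru describes it, requires perceptual signatures that survive re-encoding, cropping and recompression.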

The social platforms have “extraordinarily robust” content signature matching, says Leetaru, “able to flag even trace amounts of the original content buried under an avalanche of other material”.

But clearly, this approach either wasn’t used by Facebook or YouTube to prevent distribution of the Christchurch terrorist’s video, or if it was used it had an unacceptably high failure rate.

Leetaru’s own conclusion is damning for Facebook and YouTube.

“The problem is that these approaches have substantial computational cost associated with them and when used in conjunction with human review to ensure maximal coverage, would eat into the companies’ profits,” he says.

Profit versus protection. It appears that the social media companies need to be pushed more towards protection.

In the aftermath of this tragedy, I’ve also wondered if more could have been done to identify, monitor and shut down the terrorist’s social media presence – not to mention alert authorities – before he committed his monstrous crime.

There’s certainly a case to be made for big tech companies to work closely with government intelligence agencies, at least for the most obvious and extreme instances of people posting hate content.

In an email exchange, I asked Leetaru what he thinks of social platforms working more closely with governments on policing hate content.

“So, the interaction between social platforms and governments is a complex space,” he replied. “Governments already likely use court orders to compel the socials to provide them data on ordinary users and dissidents. And if socials work with one government to remove “terrorist” users, other governments are going to demand the same abilities, but they might define a “terrorist” as someone who criticizes the government or “threatens national stability” by publishing information that undermines the government – like corruption charges. So, socials are understandably loathe to work more closely with governments, though they do [already] work closely with many Western governments.”

But the problem is not just with ‘terrorist users’. Many individuals contribute to stoking hate and intolerance.

Here in New Zealand, there have already been people arrested and tried in court under the Objectionable Publications Act for sharing the terrorist video.

The problem is, hundreds of other people shared the video using anonymous accounts on YouTube, Reddit and other platforms where a real name isn’t required. Could AI tech help identify these anonymous cowards, then ban them from social media and report them to police?

Again, I recognise there are significant privacy implications to unmasking anonymous accounts. But I think it’s worth at least having the discussion.

“In many countries, there are limits to what you can do,” said Leetaru when I asked him about this. “Here in the US, social platforms are private companies. They can remove the content, but there are few laws restricting sharing the content – so there’s not much that could be done against those individuals legally.”

He also warned against naming and shaming anonymous trolls.

“Name and shame is always dangerous, since IP addresses are rotated regularly by ISPs – meaning your IP today might be someone across town’s tomorrow. And bad actors often use VPNs or other means to conceal their activity, including using their neighbour’s wifi or a coffee shop.”

Challenging times.

Online media giants are being challenged to improve their control of terrorism, violence, personal attacks and harassment.

Online minnows have a role to play. It’s going to take some thought and time on how to manage this.

This will be a work in progress for everyone who has a responsibility for the publication of material from individuals, many of them anonymous.


Facebook, Google accused of inciting violence

It may be more allowing violence to be incited, but is there a difference?

The US Five Eyes/Huawei threat

It looks like the US is trying to play hardball on deterring Five Eyes allies from using Huawei technology. Is this for security or economic reasons? Possibly both.

Who would you prefer to have a back door into your data, China or the US? Huawei denies allowing secret access, but we know US technology companies have helped their secret services.

Newsroom: US delivers Five Eyes threat over Huawei

The United States has delivered the most explicit threat yet to New Zealand’s role in the Five Eyes alliance if it allows Huawei into the 5G network, saying it will not share information with any country which allows the Chinese company into “critical information systems”.

The remarks from US Secretary of State Mike Pompeo call into question claims from Kiwi politicians and officials that outside pressure is not behind a decision to block Huawei equipment from being used by Spark in its 5G network.

The decision, made by the Government Communications Security Bureau late last year, has sparked fears of retaliation from China against New Zealand including a report in the CCP-owned Global Times which suggested Chinese tourists were turning away from the country in protest.

In an interview with Fox Business News, Pompeo said the country had been speaking to other nations to ensure they understood the risk of putting Huawei technology into their infrastructure.

“We can’t forget these systems were designed with the express work alongside the Chinese PLA, their military in China, they are creating real risk for these countries and their systems, the security of their people…

“We’re out sharing this information, the knowledge that America has gained through its vast network and making sure countries understand the risk. That’s important – we think they’ll make good decisions when they understand that risk.”

Asked specifically about the risks posed to Americans’ information through alliances like Five Eyes if partners allowed Huawei into their systems, Pompeo said that would be an obstacle to any future relationships.

“If a country adopts this and puts it in some of their critical information systems, we won’t be able to share information with them, we won’t be able to work alongside them.”

Given New Zealand has remained a part of Five Eyes despite allowing Huawei into its 4G and ultra-fast broadband networks, it is unclear how real the threat is – although intelligence officials have acknowledged that 5G networks provide an added layer of risk.

But the secret services of countries are not the only risk to our privacy.

Be very afraid?

If an antacid advertisement pops up after you burp, or a laxative advertisement pops up after you fart, then it may be too late.

The Government may be able to tax us on our measured emissions.

Facebook breaches privacy and trust again

Facebook can be a useful way of keeping in touch – I have been involved in a group that has brought wider family together online after little communication previously – but another revelation of breached privacy adds to concerns about using Facebook.

Guardian: Is 2019 the year you should finally quit Facebook?

Prepare yourself for an overwhelming sense of deja vu: another Facebook privacy “scandal” is upon us.

A New York Times investigation has found that Facebook gave Netflix, Spotify and the Royal Bank of Canada (RBC) the ability to read, write and delete users’ private messages. The Times investigation, based on hundreds of pages of internal Facebook documents, also found that Facebook gave 150 partners more access to user data than previously disclosed. Microsoft, Sony and Amazon, for example, could obtain the contact information of their users’ friends.

Netflix, Spotify and RBC have all denied doing anything nefarious with your private messages. Netflix tweeted that it never asked for the ability to look at them; Spotify says it had no idea it had that sort of access; RBC disputes it even had the ability to see users’ messages. Whether they accessed your information or not, however, is not the point. The point is that Facebook should never have given them this ability without getting your explicit permission to do so.

In a tone-deaf response to the Times investigation, the tech giant explained: “None of these partnerships or features gave companies access to information without people’s permission, nor did they violate our 2012 settlement with the FTC.” Perhaps not, but they did violate public trust.

This just reinforces warnings about use of anything online – treat it as if anything you say or post could be public.

One of the problems with Facebook is that it is difficult if not impossible to know what others see of what we post. We simply don’t know what Facebook shows or makes available to others, and they have shown time and again that they can’t be trusted.

Facebook (and other websites) give us a lot, but take a lot from us collectively, and put their own commercial interests first.

The Times’ new report caps off a very bad year for Facebook when it comes to public trust. Let’s just recap a few of the bigger stories, shall we?

  • March: The Observer reveals that Cambridge Analytica harvested the data of millions of Facebook users without their consent for political purposes. It is also revealed that Facebook had been keeping records of Android users’ phone calls and texts.
  • April: It was revealed that Facebook was in secret talks with hospitals to get them to share patients’ private medical data.
  • September: Hackers gained access to around 30m Facebook accounts.
  • November: Facebook acknowledges it didn’t do enough to stop its platform being used as a tool to incite genocidal violence in Myanmar. A New York Times report reveals the company hired a PR firm to try to discredit critics by claiming they were agents of George Soros.
  • December: Facebook admitted it exposed private photos from 6.8 million users to apps that weren’t authorized to view your photos. (You can check if you were affected via this Facebook link.)

If you’re still on Facebook after everything has happened this year, you need to ask yourself why. Is the value you get from the platform really worth giving up all your data for? More broadly, are you comfortable being part of the reason that Facebook is becoming so dangerously powerful?

In March, following the Cambridge Analytica scandal, Facebook put out print ads stating: “We have a responsibility to protect your information. If we can’t, we don’t deserve it.” I think they’ve proved by now that they don’t deserve it. Time and time again Facebook has made it abundantly clear that it is a morally bankrupt company that is never going to change unless it is forced to.

What’s more, Facebook has made it very clear that it thinks it can get away with anything because its users are idiots. Zuckerberg famously called the first Facebook users “dumb fucks” for handing their personal information over to him; his disdain for the people whose data he deals with doesn’t appear to have lessened over time.

I will keep using Facebook for what suits me, but I will continue to give them little in current or personal information. And I will continue to ignore advertising.


The Facebook fiasco

NY Times – Delay, Deny and Deflect: How Facebook’s Leaders Fought Through Crisis

Facebook has gone on the attack as one scandal after another — Russian meddling, data sharing, hate speech — has led to a congressional and consumer backlash.

This account of how Mr. Zuckerberg and Ms. Sandberg navigated Facebook’s cascading crises, much of which has not been previously reported, is based on interviews with more than 50 people. They include current and former Facebook executives and other employees, lawmakers and government officials, lobbyists and congressional staff members. Most spoke on the condition of anonymity because they had signed confidentiality agreements, were not authorized to speak to reporters or feared retaliation.

Facebook declined to make Mr. Zuckerberg and Ms. Sandberg available for comment. In a statement, a spokesman acknowledged that Facebook had been slow to address its challenges but had since made progress fixing the platform.

A handful of US tech companies have radicalised the world

There is no doubt that the Internet has dramatically changed how media and politics operate. Over the last few years a few dominant US companies have radically changed how democracy is done, including allowing nefarious interference in election campaigns.

And at the same time there have been a number of political swings to more controversial and extreme leaders and parties.

Broderick (via twitter):

In the last 4 years, I’ve been to 22 countries, 6 continents, and been on the ground for close to a dozen referendums and elections. Three things are now very clear to me:

1) A handful of American companies, Facebook and Google more than any other, have altered the fundamental nature of almost every major democracy on Earth. In most of these elections, far-right populism has made huge strides.

2) The misinformation, abuse, and radicalization created by these companies seems to affect poorer people and countries more heavily.

These companies replace local community networks, local media, local political networks and create easily exploitable, unmoderated new ones.

3) It is going to get worse and more connected. It is getting more mobile. It is having more physical real-world effects. Apps like WhatsApp and Instagram are even harder to track than Facebook.

It’s been a decade since I first felt like something was changing about the way we interact with the internet. In 2010, as a young news intern for a now-defunct website called the Awl, one of the first pieces I ever pitched was an explainer about why 4chan trolls were trying to take the also now-defunct website Gawker off the internet via a distributed denial of service (DDOS) attack. It was a world I knew. I was a 19-year-old who spent most of my time doing what we now recognize as “shitposting.” It was the beginning of an era where our old ideas about information, privacy, politics, and culture were beginning to warp.

I’ve followed that dark evolution of internet culture ever since. I’ve had the privilege — or deeply strange curse — to chase the growth of global political warfare around the world. In the last four years, I’ve been to 22 countries, six continents, and been on the ground for close to a dozen referendums and elections. I was in London for UK’s nervous breakdown over Brexit, in Barcelona for Catalonia’s failed attempts at a secession from Spain, in Sweden as neo-Nazis tried to march on the country’s largest book fair. And now, I’m in Brazil. But this era of being surprised at what the internet can and will do to us is ending. The damage is done. I’m trying to come to terms with the fact that I’ll probably spend the rest of my career covering the consequences.

There are certainly signs of major consequences internationally.

In New Zealand we have had political change, but after a nine year National government it wasn’t a big deal, especially as Labour (and NZ First) are not dramatically different to National in most significant policies. It was more of a tweak than upheaval here, probably.

But we can’t help but be affected by what happens in the rest of the increasingly radicalised world.

To be sure, populism, nationalism, and information warfare existed long before the internet. The arc of history doesn’t always bend toward what I think of as progress. Societies regress. The difference now is that all of this is being hosted almost entirely by a handful of corporations.

Why is an American company like Facebook placing ads in newspapers in countries like India, Italy, Mexico, and Brazil, explaining to local internet users how to look out for abuse and misinformation? Because our lives, societies, and governments have been tied to invisible feedback loops, online and off. And there’s no clear way to untangle ourselves.

The worst part of all of this is that, in retrospect, there’s no real big secret about how we got here.

The social media Fordlândias happening all over the world right now probably won’t last. The damage they cause probably will. The democracies they destabilize, the people they radicalize, and the violence they inspire will most likely have a long tail. Hopefully, though, it won’t take us a hundred years to try to actually rebuild functioning societies after the corporations have moved on.

Perhaps. It is very difficult to know where social media, democracy and the world will go to from here.

Mental health of online moderators

An ODT article today doesn’t seem to be online, but it refers to this: We need to talk about the mental health of content moderators

Selena Scola worked as a public content contractor, or content moderator, for Facebook in its Silicon Valley offices. She left the company in March after less than a year.

In documents filed last week in California, Scola alleges unsafe work practices led her to develop post-traumatic stress disorder (PTSD) from witnessing “thousands of acts of extreme and graphic violence”.

Facebook acknowledged the work of moderation is not easy in a blog post published in July. In the same post, Facebook’s Vice President of Operations Ellen Silver outlined some of the ways the company supports their moderators:

All content reviewers — whether full-time employees, contractors, or those employed by partner companies — have access to mental health resources, including trained professionals onsite for both individual and group counselling.

But Scola claims Facebook fails to practice what it preaches. Previous reports about its workplace conditions also suggest the support they provide to moderators isn’t enough.

How moderating can affect your mental health

Facebook moderators sift through hundreds of examples of distressing content during each eight hour shift.

They assess posts including, but not limited to, depictions of violent death – including suicide and murder – self-harm, assault, violence against animals, hate speech and sexualised violence.

Studies in areas such as child protection, journalism and law enforcement show repeated exposure to these types of content has serious consequences. That includes the development of PTSD. Workers also experience higher rates of burnout, relationship breakdown and, in some instances, suicide.

This is a modern problem that an increasing number of people are exposed to. The Internet has made a huge amount of information readily available to most of the world, but unfortunately a lot of material reflects the worst of the world, and the worst of human nature.

We also need to address the ongoing issue of precarity in an industry that asks people to put their mental health at risk on a daily basis. This requires good industry governance and representation. To this end, Australian Community Managers have recently partnered with the MEAA to push for better conditions for everyone in the industry, including moderators.

As for Facebook, Scola’s suit is a class action. If it’s successful, Facebook could find itself compensating hundreds of moderators employed in California over the past three years. It could also set an industry-wide precedent, opening the door to complaints from thousands of moderators employed across a range of tech and media industries.

Rapidly changing use of technology means that solutions to problems introduced by the technology will struggle to keep up.

Note that I am one online moderator who has no concerns about the exposure I get and have to deal with. The problems here are very minor in comparison to some parts of the Internet, and I am not reliant on this for earning a living, so it is choice rather than necessity that I continue to deal with the relatively trivial moderation concerns here.


US democratic dysfunction continues

Facebook says it has identified further attempts to use social media to interfere with US elections, while Robert Mueller has referred three investigations into possible illicit foreign lobbying by Washington insiders to federal prosecutors in New York. As this involves people associated with Democrats as well as Republicans, President Trump should at least be partially supportive of legally confronting the swamp.

NY Times: Facebook Identifies an Active Political Influence Campaign Using Fake Accounts

Facebook said on Tuesday that it had identified a political influence campaign that was potentially built to disrupt the midterm elections, with the company detecting and removing 32 pages and fake accounts that had engaged in activity around divisive social issues.

The company did not definitively link the campaign to Russia. But Facebook officials said some of the tools and techniques used by the accounts were similar to those used by the Internet Research Agency, the Kremlin-linked group that was at the center of an indictment this year alleging interference in the 2016 presidential election.

Facebook said it had discovered coordinated activity around issues like a sequel to last year’s deadly “Unite the Right” white supremacist rally in Charlottesville, Va. Activity was also detected around #AbolishICE, a left-wing campaign on social media that seeks to end the Immigration and Customs Enforcement agency.

The dream of the Internet enabling a revolution in ordinary people involvement in democracy has become an electoral nightmare in the US.

And we are not immune from it in New Zealand, but the greatest risk here is probably self-inflicted wounds by ‘social justice warriors’ and political activists trying to impose their views and policies on everyone else, and trying to shut down speech they don’t like or disagree with.

Also in the US, illicit foreign lobbying is in the spotlight with the trial of Paul Manafort under way – Manafort on trial: A scorched-earth prosecutor and not a mention of Trump

The nation’s inaugural look at special counsel Mueller’s team in action started with a bang. Assistant U.S. Attorney Uzo Asonye, brought onto the special counsel’s staff from the Alexandria federal prosecutor’s office for this case, faced the jury and declared: “A man in this courtroom believed the law did not apply to him.”

With more than a dozen of his colleagues from the federal investigation alongside and behind him, Asonye recovered quickly, keeping jurors riveted through a 26-minute opening statement that portrayed Manafort as someone who lied about his taxes, his income, his business, and a litany of other topics.

Only once, toward the end of the first day, did anyone mention the words “special counsel.” Defence attorney Thomas Zehnle said it, casually, in passing, with no reference to Trump or Russia or any of the political firestorm that has dominated the news for all of this presidency.

Yet the reason the courtroom was packed, the reason an overflow courtroom three stories below was also full, the reason the lawn in front of the building was given over to TV crews in their ritual encampment awaiting news, the reason for all of this was the cases yet to come, the deeper layers of the onion.

And three more lobbyists are also under investigation – Mueller Passes 3 Cases Focused on Illicit Foreign Lobbying to Prosecutors

Robert S. Mueller III, the special counsel, has referred three investigations into possible illicit foreign lobbying by Washington insiders to federal prosecutors in New York who are already handling the case against President Trump’s former lawyer, according to multiple people familiar with the cases.

The cases cut across party lines, focusing on both powerful Democratic and Republican players in Washington, including one whom Mr. Trump has repeatedly targeted — the Democratic superlobbyist Tony Podesta. The cases are unlikely to provoke an outburst from Mr. Trump similar to the one he unleashed in April after prosecutors raided the home and office of Michael D. Cohen, then the president’s lawyer. But these cases do represent a challenge to Washington’s elite, many of whom have earned rich paydays lobbying for foreign interests.

They also tie into the special counsel investigation of Mr. Trump: All three cases are linked to Paul Manafort, the president’s former campaign chairman, whose trial on financial fraud charges began Tuesday in Alexandria, Va.

Under American law, anyone who lobbies or conducts public relations on behalf of a foreign interest in the United States must register with the Justice Department. The law carries stiff penalties, including up to five years in prison. But it had rarely been enforced, and thus widely ignored, until recently.

Trump should be happy that the political swamp of Washington is at least under scrutiny, albeit a long way from being drained.


The jury is still out on whether Trump is going to monster the swamp, or if he is a monster of the swamp.

But it is obvious that dysfunction in US democracy is a long way from being rectified, if that is at all possible.

 

Zuckerberg apologises ahead of hearings, NZ data breaches

Mark Zuckerberg has apologised ahead of hearings in Congress over Facebook data breaches and possible effects on the 2016 US election. In the meantime it has been revealed that about 64,000 New Zealanders may have been involved in the data breaches.

More talk from Zuckerberg over ongoing Facebook data revelations, but Congress will be looking for more than apologies in two days of hearings.

Reuters: CEO Zuckerberg says Facebook could have done more to prevent misuse

Facebook Inc Chief Executive Mark Zuckerberg told Congress on Monday that the social media network should have done more to prevent itself and its members’ data being misused and offered a broad apology to lawmakers.

“We didn’t take a broad enough view of our responsibility, and that was a big mistake,” he said in remarks released by the U.S. House Energy and Commerce Committee on Monday. “It was my mistake, and I’m sorry. I started Facebook, I run it, and I’m responsible for what happens here.”

“It’s clear now that we didn’t do enough to prevent these tools from being used for harm. That goes for fake news, foreign interference in elections, and hate speech, as well as developers and data privacy.”

His conciliatory tone precedes two days of Congressional hearings where Zuckerberg is set to answer questions about Facebook user data being improperly appropriated by a political consultancy and the role the network played in the U.S. 2016 election.

Top of the agenda in the forthcoming hearings will be Facebook’s admission that the personal information of up to 87 million users, mostly in the United States, may have been improperly shared with political consultancy Cambridge Analytica.

But lawmakers are also expected to press him on a range of issues, including the 2016 election.

Meanwhile:

Facebook, which has 2.1 billion monthly active users worldwide, said on Sunday it plans to begin on Monday telling users whose data may have been shared with Cambridge Analytica.

This potentially includes thousands of New Zealanders. RNZ:

Facebook today revealed that an estimated 64,000 New Zealanders were likely to have had their data collected and used by Cambridge Analytica. The company is accused of using private data to personally target voters to manipulate elections.

A spokesperson for the social media giant said 87 million people were estimated to have been affected by the “Cambridge Analytica data misuse” worldwide, with more than 80 percent of those based in the US.

The data was apparently obtained via the “thisismydigitallife” personality test on Facebook and pulled in information about users’ friends without their explicit permission.

“For New Zealand, we estimate a total of 63,724 people may have been impacted – 10 are estimated to have downloaded the quiz app with 63,714 friends possibly impacted,” the company said.
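Facebook's New Zealand figures reconcile in a straightforward way: the handful of direct downloads of the quiz app plus the friends exposed through those downloads give the stated total. A minimal sketch of that arithmetic (variable names are mine, not Facebook's):

```python
# Figures from Facebook's statement: 10 NZ users downloaded the quiz app,
# and 63,714 of their friends may have had data pulled in as a result.
direct_downloads = 10
friends_impacted = 63_714

total_impacted = direct_downloads + friends_impacted
print(total_impacted)  # 63724, matching Facebook's stated NZ total
```

It illustrates the scale of the friend-data collection: almost everyone affected never touched the app themselves.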

The spokesperson said that from Tuesday the company would begin showing users which apps they had connected to at the top of their Facebook feed, along with an easy way to delete them.

“As part of this, we will let people know if their data might have been accessed by Cambridge Analytica,” the spokesperson said.

“We’re dramatically reducing the information people can share with apps. We’re shutting down other ways data was being shared through Groups, Events, Pages and Search.”

NetSafe chief executive Martin Cocker…

…said he did not think Facebook users needed to shut down their accounts following the revelation.

Mr Cocker said the breach was a reminder for Facebook users to take their privacy settings seriously, but not necessarily to quit the social media platform.

“Facebook has responded to this breach by setting up a series of tools and improving their management of apps, and if anything the breach has led to a safer Facebook in the future.”

There is nothing obviously different on my Facebook this morning.