British parliament releases contentious Facebook emails

British parliament releases contentious Facebook emails by Mathew Ingram:


When a British parliamentary committee looking into Facebook’s role in misinformation and data privacy seized documents last week from an American businessman involved in a lawsuit with Facebook, the committee threatened to make the files public, even though they were sealed by a California court order. And that’s exactly what it did on Wednesday: Damian Collins, the head of the committee–and the man who used a little-known British law to send a Serjeant-at-Arms to the American businessman’s hotel room to escort him to the House of Commons–published more than 200 pages of emails and other documents. The files came from a court case with Six4Three, makers of an app that allowed users to search their friends’ photos for bathing suit pictures. The details in the documents won’t come as a surprise to anyone who has been following Facebook and its various privacy blunders, but it is illuminating to see some of the company’s practices exposed in black and white.
One of the most contentious revelations revolves around a proposal to update the Facebook app for Android phones so that the social network could read and store the call logs of users. It would then use the data from a user’s call history, as well as their text messages, to tweak the News Feed algorithm and other features (including the “People You May Know” feature, which recommends other users to friend on the network). An email from a senior Facebook staffer admits this is “a pretty high-risk thing to do from a PR perspective, but it appears that the growth team will charge ahead and do it.” A subsequent email says the team has figured out that if the app only wants access to the call logs, it could offer a simple “click to upgrade” option without having to get users to give their permission through a special dialog box. Ashkan Soltani, former chief technologist for the Federal Trade Commission, pointed out that this kind of behavior may be a breach of the “consent decree” that Facebook signed with the FTC in 2011, in which it agreed not to engage in certain kinds of behavior.
From the British committee’s viewpoint, one of the more interesting email chains has to do with Facebook’s data policies; the committee is investigating the company’s behavior in the Cambridge Analytica scandal, in which that firm wrongfully acquired personal data on more than 50 million users, harvested through a personality quiz app. Facebook has said repeatedly that access to this kind of data was closed off in 2015, but the emails and other documents make it clear that for certain “whitelisted” companies, access to that data continued (as _The Wall Street Journal_ has reported). The committee’s preamble to the documents continues: “It is not clear that there was any user consent for this, nor how Facebook decided which companies should be whitelisted.”
In another document, Facebook outlines the restrictions it places on certain companies when it comes to accessing Facebook data. “We maintain a small list of strategic competitors that Mark personally reviewed,” the document states. “Any usage beyond that specified is not permitted without Mark level sign-off.” In the case of certain competitors, especially ones that competed with Facebook’s pet features (like video), Facebook would terminate virtually all access to user data. It did this in the case of Twitter’s short-lived Vine video app, for example: in an email to Zuckerberg in 2013, a Facebook product manager says Vine (which had just launched that same day) allowed users to find friends by using the Facebook API. He suggested shutting down Twitter’s access to this data immediately, and Zuckerberg responded: “Yup, go for it.”
In a response to the documents’ publication, Zuckerberg pointed out that in the time leading up to the changes to its platform in 2015, the company was driven primarily by a desire to connect people in as many different ways as possible, until it discovered that developers were building “shady apps that abused people’s data.” Without naming the bikini app company, the Facebook CEO says some of the developers whose apps were kicked off the platform sued in an attempt to reverse the change, “but we’re confident this was the right thing to do and that we’ll win these lawsuits.” Whether the published emails will also provide more ammunition for those looking to regulate the social network remains to be seen.

Obviously, things have progressed since this news came out. It should cause users, yet again, to reflect on their use of Facebook’s platforms.

Updates:

  • [Internal Documents Show Facebook Has Never Deserved Our Trust or Our Data – Motherboard](https://motherboard.vice.com/en_us/article/7xyenz/internal-documents-show-facebook-has-never-deserved-our-trust-or-our-data)
  • [Facebook Fined $11.3M for Privacy Violations – Threatpost](https://threatpost.com/facebook-fined-privacy/139824/)

Resisting Law Enforcement’s Siren Song: A Call for Cryptographers to Improve Trust and Security

Creating systems of trust and real security for users should be all hands on deck, from government to the private sector. We need to encrypt the web, secure data at rest and in transit, and ensure that homes, cars and anything that can be connected to the internet are safe and trustworthy. The array of options is poor since security architects have to bolt security onto insecure systems. But that’s all the more reason to encourage people who understand how computer security works (and how it fails) to help. After all, there are only so many hours in the day, and the more attention we pay to these problems, the faster and better we can address them.

It’s not just individuals and private institutions who should be focusing on improving security for users, of course. Governments should be shouldering their responsibility for public safety by leading, incentivizing and, in places, even legally mandating real digital security for their increasingly vulnerable citizens.

But they are not. While the U.S. government has pushed hard to make sure that companies give them information about security problems—in the Department of Homeland Security’s Information Sharing and Analysis Centers and in the Cybersecurity Information Sharing Act passed in 2015, for example—there has been very little information or tools coming back to protect the public as technology users. This is even as we’re pushed into a world that increasingly relies on the internet for every facet of our daily lives. It’s also as the consequences of losing control of our data grow larger and more dire. Digital networks are now increasingly coming into our homes and cars. There are pushes to move to online voting, to the horror of security experts. The vast majority of us carry our phones with us everywhere; with them comes access to a tremendous amount of intimate information about us, our loved ones and our business and personal associations, both stored on the device and accessible through them.

The government should generate, incentivize and support efforts to build a more secure and trustworthy internet, along with the devices and services that rely on it. Instead, law enforcement in the U.S. and elsewhere too often demonize companies and individuals that offer strong security and pressure them to offer worse tools, not better ones.

Resisting Law Enforcement’s Siren Song: A Call for Cryptographers to Improve Trust and Security – Lawfare

Great piece, especially in light of the recent actions in Australia.


Find out what Twitter and Facebook think you like

Find out what Twitter and Facebook think you like:

Facebook and Twitter don’t like to talk about how, exactly, their algorithms determine users’ interests. According to their privacy policies, both collect basic information you provide in your profile, like your birthday and gender, as well as details around your log-ins, like what devices you use and your location, and your posts and “likes.” Twitter and Facebook may also receive information from your browser cookies, what links you click, and third party apps that you’ve connected to your account. They might also be able to match additional info from their partners to you based on your phone number or email address.

Though the details of their algorithms aren’t clear, Facebook and Twitter are at least attempting to be somewhat transparent about the end result of those programs. Your Twitter and your Facebook ad settings allow you a glimpse into what social media companies (and the advertisers who pay them) think you’re into.

(Via Quartz)


Encryption debate reminiscent of climate change arguments: Senetas

Encryption debate reminiscent of climate change arguments: Senetas:

Chair of Australian security vendor Senetas Francis Galbally has told the Parliamentary Joint Committee on Intelligence and Security (PJCIS) that the current debate surrounding the proposed encryption-busting Assistance and Access Bill is similar to the one surrounding climate change in Australia.

Despite being told over and over again by experts that accessing encrypted communications will introduce weaknesses into the system, committee members continued to press that a solution is possible.

“It’s a bit like the people denying climate change — all the scientists say there’s climate change, but you politicians don’t admit it,” Galbally said towards the end of the hearing on Friday morning. “It’s the same thing here.

“You cannot do it without creating a systemic weakness. There’s no definition of it, but we’ve had everyone around the world telling you the same thing.”

Galbally detailed how the company had conducted an assessment of the Bill at its own expense, and identified three “catastrophic outcomes” as certain or likely to occur if the Bill is passed.

“The Bill, should it become law, will profoundly undermine the reputations of Australian software developers and hardware manufacturers in international markets; there is simply no doubt that this will result in a significant reduction in local R&D and manufacturing as a consequence of declining employment and export revenue,” Galbally said.

“Foreign governments and competitors will use the mere existence of this legislation to claim that Australian cybersecurity products are required to use or collaborate in creating encryption backdoors.”

[Galbally] added that customers and global competitors are not interested in the nuances and exemptions that could possibly be added to the Bill, as the company will be undercut and lose business.

“In the cut and thrust of the sales world, the existence of such legislation is enough for us to lose a sale,” Galbally added.

“I can say confidently that Senetas will be directly affected, and with exports representing over 95 percent of our sales, there will be a substantial impact on our business, were we to remain in Australia.”

… Should the Bill proceed, Senetas said it could find itself, and up to 200 jobs, moving offshore to avoid perception issues.

… ”The Russians, for example, they haven’t even done it because they know to do it upsets other things far greater than what they are trying to do.

“You have a problem with insurgents in Syria, you don’t drop an atom bomb on those insurgents and see what happens, the consequences that happen to everybody else around. This is the equivalent of dropping an atom bomb to find some nefarious character.

“You will destroy, eventually, Australian’s own data protection — that’s what it is.”

(Via Latest Topic for ZDNet in security)

The battle in Australia over encryption and data protection makes my eyes roll every time I read about it. But the Deputy U.S. Attorney General has similar ideas to the Aussies:

“There is nothing virtuous about refusing to help develop responsible encryption, or in shaming people who understand the dangers of creating any spaces—whether real-world or virtual—where people are free to victimize others without fear of getting caught or punished,” Rosenstein said.

He is wrong. Strong encryption offers myriad virtues: for privacy, for freedom and liberty, for capitalism, for trust in the economy, and a bunch of other things. Rosenstein wants to manage to the exception, basically treating edge cases (criminality) as the norm, instead of managing by exception.

“Responsible encryption,” as the Deputy U.S. A.G. defines it, is weak encryption … at best.

Back to Australia: here’s a nice bit of hand-waving and false equivalence (and/or false analogy) from the committee chair:

Towards the end of the hearing, PJCIS chair Andrew Hastie justified the encryption-busting legislation due to the amount of methamphetamine use in his electorate.

“We use more ice in regional WA than in Sydney or Melbourne, so my point is from an economic perspective, we have a serious problem in this country with ice, and of course, my electorate has a large meth problem,” he said.

“I’ll just put on the record, different perspectives on this question.”

It’s not different perspectives. They are not related … except by exception.

Baby, meet bathwater.


How Surveillance Inhibits Freedom of Expression

How Surveillance Inhibits Freedom of Expression:

Privacy encourages social progress by giving the few room to experiment free from the watchful eye of the many. Even if you are not personally chilled by ubiquitous surveillance, the society you live in is, and the personal costs are unequivocal.

(Via Schneier on Security)

Take a few minutes and give the whole piece a good read.


Talk About Burying the Privacy Lede

Apple’s T2 security chip disconnects a MacBook’s microphone when users close the lid

Feature only available for MacBook Pro and MacBook Air models released in 2018.

Apple revealed today that all new notebooks that come with a built-in T2 security chip will now disconnect the built-in microphone at the hardware level when users close their devices’ lids. This new security enhancement is meant to prevent malware from secretly recording users.

(Via ZDNet)

I like this use of the T2 chip. If we could also get a hardware camera disconnect option, plus the ability to trigger both regardless of lid state, and on more than a handful of laptop models, that would be delightful.

Apple seems to miss the point, though:

While Apple hasn’t explicitly spelled it out in the October version of the T2 chip white paper, the new feature was likely added to prevent malware or intrusive apps from secretly recording users when they close their lids, and when users normally believe the device and OS are in a suspended state.

Once the lid is open, the idea is that users will either rely on an antivirus capable of detecting running malware or apps like OverSight that alert the user when a Mac process –legitimate or not– tries to access a device’s microphone or camera.

Apple also said it didn’t configure the T2 chip to disconnect the camera at the hardware level similar to the microphone because it was pointless as the camera’s field of view is obstructed anyway when the user closes the lid.


Privacy Badger Now Fights More Sneaky Google Tracking | Electronic Frontier Foundation

With its latest update, Privacy Badger now fights “link tracking” in a number of Google products.

Link tracking allows a company to follow you whenever you click on a link to leave its website. Earlier this year, EFF rolled out a Privacy Badger update targeting Facebook’s use of this practice. As it turns out, Google performs the same style of tracking, both in web search and, more concerning, in spaces for private conversation like Hangouts and comments on Google Docs. From now on, Privacy Badger will protect you from Google’s use of link tracking in all of these domains.

— Read on www.eff.org/deeplinks/2018/10/privacy-badger-now-fights-more-sneaky-google-tracking

Yikes!

More reason to move off of Google properties and, when you have no choice but to use them, to protect yourself.
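As context for what Privacy Badger is undoing here: redirect-style link tracking generally works by pointing the anchor at the platform’s own redirector, which logs the click before forwarding you to the real destination. A minimal sketch of recovering the destination from such a wrapped link (the parameter names shown are real-world examples, but treat the details as illustrative, not a spec):

```typescript
// A tracked link points at a redirector rather than the destination, e.g.
//   https://www.google.com/url?q=https%3A%2F%2Fexample.com%2Fpage
// The redirector records the click, then forwards the browser onward.
// This helper recovers the real destination from such a wrapped link.
function unwrapTrackedLink(href: string, param: string = "q"): string {
  const url = new URL(href);
  // Google's redirector uses "q"; Facebook's l.php uses "u" (assumptions
  // based on commonly observed URLs -- check the actual link you're given).
  return url.searchParams.get(param) ?? href;
}

console.log(
  unwrapTrackedLink("https://www.google.com/url?q=https%3A%2F%2Fexample.com%2Fpage")
);
```

Extensions like Privacy Badger effectively do this rewrite in the page itself, so the click never touches the redirector at all.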


ICYMI: Facebook Is Allowing Ad Targeting Based on Contact Information You Have No Control Over

Facebook Is Allowing Ad Targeting Based on Contact Information You Have No Control Over:

Even for Facebook’s low standards, this is exceptionally unethical: you haven’t given them permission to use this information; someone you know or someone you purchased products from has done that for you, probably with consent buried in an opaque privacy policy. There’s no way to opt out. And there are few-to-no regulations governing this.

(Via Pixel Envy)

This is a disaster from a security perspective. Users enable 2FA to protect themselves, with the expectation that the contact information they provide is restricted to that use alone.


Mobile Websites Can Tap Into Your Phone’s Sensors Without Asking

Mobile Websites Can Tap Into Your Phone’s Sensors Without Asking:

When an app wants to access data from your smartphone’s motion or light sensors, iOS and Android require them to get your permission first. That keeps a fitness app, say, from counting your steps without your knowledge. But a team of researchers has discovered that those rules don’t apply to websites loaded in mobile browsers, which can often access an array of device sensors without any notifications or permissions whatsoever.

That mobile browsers offer developers access to sensors isn’t necessarily problematic on its own. It’s what helps those services automatically adjust their layout, for example, when you switch your phone’s orientation. And the World Wide Web Consortium standards body has codified how web applications can access sensor data. But the researchers—Anupam Das of North Carolina State University, Gunes Acar of Princeton University, Nikita Borisov of the University of Illinois at Urbana-Champaign, and Amogh Pradeep of Northeastern University—found that the standards allow for unfettered access to certain sensors. And sites are using it.

(Via Security Latest)

Clearly this is a gap in vendor protection and in users’ informed consent. When paired with the amount of bandwidth and other resources consumed by scripts, trackers, ads and the like, this news reinforces my opinion of ad-blockers that also deal with JavaScript.

Before we all panic, please note that the study only found 3.7% of the top 100,000 sites make use of this. And bear the following in mind:

That unapproved access to motion, orientation, proximity, or light sensor data alone probably wouldn’t compromise a user’s identity or device. And a web page can only access sensors as long as a user is actively browsing the page, not in the background.

Regardless, there is clearly an attack surface here that will be exploited. I can imagine something targeted using watering hole attacks being particularly successful.

“There’s a difference between the access from the web scripts compared to say mobile apps,” Acar says. “And a lot of this is legitimate. But the fact that access can be granted without prompting the user is surprising. It’s currently up to the vendors, and vendors tend to choose the side of more usability.”
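To make the attack surface concrete, here is a small sketch of the kind of silent inference a page script could run on motion data. The `devicemotion` event hookup (in the comment) is the real browser API the researchers describe; `dominantAxis` is a hypothetical illustration of what a script might compute from it without any prompt:

```typescript
// Hypothetical sketch: infer which axis gravity dominates from raw
// accelerometer samples -- a crude proxy for device orientation that a
// page script can compute without any permission prompt in most mobile
// browsers (behavior varies by browser and version).
type Sample = { x: number; y: number; z: number };

function dominantAxis(samples: Sample[]): "x" | "y" | "z" {
  const sum = { x: 0, y: 0, z: 0 };
  for (const s of samples) {
    sum.x += Math.abs(s.x);
    sum.y += Math.abs(s.y);
    sum.z += Math.abs(s.z);
  }
  if (sum.x >= sum.y && sum.x >= sum.z) return "x";
  return sum.y >= sum.z ? "y" : "z";
}

// In a browser, samples arrive via the devicemotion event, with no
// notification to the user:
//
//   const samples: Sample[] = [];
//   window.addEventListener("devicemotion", (e) => {
//     const a = e.accelerationIncludingGravity;
//     if (a?.x != null && a.y != null && a.z != null) {
//       samples.push({ x: a.x, y: a.y, z: a.z });
//     }
//   });
```

A phone lying flat would report gravity mostly on the z-axis; held upright, mostly on y. Innocuous on its own, as the article notes, but it illustrates how much a page can observe before any consent dialog enters the picture.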


Privacy Shield on Shaky Ground: What’s Up with EU-U.S. Data Privacy Regulations

Privacy Shield on Shaky Ground: What’s Up with EU-U.S. Data Privacy Regulations:

There’s a lot going on in the privacy and data protection world. But one of the most pressing issues is the uncertain fate of Privacy Shield, the framework governing the flow of data between the EU and the U.S. for commercial purposes.

The Trump Administration has been given an ultimatum: comply with Privacy Shield, or risk a complete suspension of the EU-U.S. data sharing agreement. In a letter dated July 26, EU commissioner for justice Věra Jourová wagered to U.S. commerce secretary Wilbur Ross that suspension of the EU-U.S. Privacy Shield system would incentivize the U.S. to comply fully with the terms of the agreement. But Jourová’s urging that Ross “be smart and act” in appointing senior personnel to oversee the data sharing deal is hardly new. The July letter closely echoes a European Parliament (EP) resolution passed just three weeks earlier, and the European Commission (EC) voiced similar sentiments in its review of the Privacy Shield Framework last September. Further adding to the chorus of voices raising concerns about Privacy Shield compliance are tech and business groups, which jointly called for the nomination of a Privacy Shield ombudsperson in an Aug. 20 letter.

In addition to admonishing the EC’s failure to hold the U.S. accountable thus far, the EP resolution calls for a suspension of Privacy Shield if the U.S. has not fully complied by Sept. 1—though no such suspension has yet been announced. It also expresses serious concerns regarding the U.S.’s recent adoption of the Clarifying Lawful Overseas Use of Data (Cloud) Act and the legislation’s potential conflict with EU data protection laws. With the General Data Protection Regulation (GDPR)—the EU’s new regulatory regime for the protection of individual data—having come into effect on May 25, 2018, the EP considers the EC in contravention of GDPR Article 45(5). This article requires the EC to repeal, amend, or suspend an adequacy decision to the extent necessary once a third country no longer ensures an adequate level of data protection— until the U.S. authorities comply with its terms.

So what led to this ultimatum, and what’s next on the global data protection stage?    

(Via Lawfare – Hard National Security Choices)

The article gives a level set on Privacy Shield and then dives into specific areas. I highly recommend giving this a good read.
