NSA General Counsel Glenn Gerstell Remarks to Georgetown Cybersecurity Law Institute:

On Wednesday, NSA General Counsel Glenn Gerstell delivered the following remarks at the Georgetown Cybersecurity Law Institute in a speech entitled “Failing to Keep Pace: The Cyber Threat and Its Implications for Our Privacy Laws.”

Imagine walking through the front doors of your office on a Thursday morning and immediately receiving a note instructing you not to turn on your work computer for an indefinite period of time. On March 22, this very scenario played out in Atlanta’s City Hall, as employees were handed printed instructions that stated, in bold, “Until further notice, please do not log on to your computer.” At 5:40 that morning, city officials had been made aware that a particular strain of SamSam ransomware had brought municipal services in Atlanta to a halt. This type of ransomware is known for locking up its victims’ files with encryption, temporarily changing those file names to “I’m sorry,” and giving victims a week to pay a ransom.

Residents couldn’t pay for things like water or parking fines. The municipal courts couldn’t validate warrants. Police resorted to writing reports by hand. The city stopped taking employment applications. One city council member lost 16 years of data.

Officials announced that the ransom demand amounted to about $51,000, but have not indicated whether the city paid the ransom. Reports suggest, however, that the city has already spent over $2 million on cybersecurity firms who are helping to restore municipal systems. Atlanta also called in local law enforcement, the FBI, DHS, the Secret Service, and independent forensic experts to help assess what occurred and to protect the city’s networks in the future.

Taking a somewhat relaxed approach to cybersecurity, as the situation in Atlanta seems to have demonstrated, is clearly risky, but unfortunately, it is not uncommon. As our reliance on digital technology has increased, both private companies and public sector entities have experienced crippling cyberattacks that brought down essential services. Atlanta is but one example of the pervasiveness of connected technologies and the widespread impact on our lives when those technologies no longer function correctly.

We’ve reached an inflection point: we now depend upon connected technology to accomplish most of our daily tasks, in both our personal and business lives. At least one forecast predicted that over 20 billion connected devices will be in use by 2020. I hardly need tell the audience at a cybersecurity conference about the nature and scope of our cyber vulnerabilities. What’s surprising is not the extent of this vulnerability, but that it has manifested itself in ways that haven’t yet had dramatic, society-wide effects, although the Atlanta example is surely a good scare. I suspect we all fear that far more crippling and dangerous cyber incidents are likely in our future, since malicious activity is relatively easy and the increasing pace of connected technology simply increases the target size.

So the time has come – indeed, if it has not already passed – to think seriously about some fundamental questions with respect to our reliance on cyber technologies: How much connected technology do we really want in our daily lives? Do we want the adoption of new connected technologies to be driven purely by innovation and market forces, or should we impose some regulatory constraints? These topics are too big for me to confront here today, and some aspects of these questions will be dealt with at this Conference, so I ask you to keep them in mind. For today, I will concentrate on one area where the legal issues raised by these questions coalesce in a significant way – namely, the privacy implications of our increasingly digital lives. You may be wondering why I, as the NSA General Counsel, chose to discuss the privacy aspects of cybersecurity with you here today. As you probably know, at NSA we have two equally important missions: foreign electronic surveillance (or “signals intelligence” to use the legal term) to provide our country’s leaders and the military with the foreign intelligence they need to keep us safe, and a cybersecurity mission, mostly focused on national security systems. I feel NSA can make a contribution to the privacy discussion because we at NSA – as a direct result of our twin missions – are exceptionally knowledgeable about and sensitive to the need to comply with constitutional, statutory and regulatory protections for the privacy and civil liberties of Americans.

Although we continue to forge ahead in the development of new connected technologies, it is clear that the legal framework underpinning those technologies has not kept pace. Despite our reliance on the internet and connected technologies, we simply haven’t confronted, as a US society, what it means to have privacy in a digital age.

If you look at other technologies that were considered both novel and significant, regulations may have lagged behind, but we didn’t let the technology get too far out in advance before laws and societal norms caught up. Take, for example, automobiles. By the time they became pervasive and affordable enough for most families to own, we had already started putting in place laws governing how they should be operated, how they should be safely constructed and maintained, how they should be inspected, and how all of these new rules should be enforced on the federal, state, and local levels. And our society figured out the resultant impact on infrastructure. Some of those decisions were intuitively obvious and others weren’t.

You could make the same analogies with electricity or radio – in each case our society sorted out, in a reasonably timely fashion over a few decades, fundamental questions about public versus private ownership and the extent of substantive regulation, whether for economic or safety or other factors. But not so with cyber. Has there ever been a technology that has become this pervasive, this ubiquitous, and this impactful so quickly? It’s no wonder our societal norms and legal structures – and here I’m mostly referring to our concepts of privacy – have failed to keep pace.

We all recognize that an unprecedented amount of our personal information is now available online. Although this certainly improves efficiency, facilitates transactions, and enables accessibility of stored information, it also means that we’ve made our information vulnerable. It could be exposed through a breach or hack, manipulated by a bad actor, sold to a third party, or, as in the Atlanta example, simply made unavailable.

Ironically, increased cybersecurity, which necessarily includes network monitoring and intrusion detection, also has privacy implications. In order to effectively detect whether an attack is occurring or has occurred, information security systems and professionals need to see the activity happening on their networks. This can include, for example, monitoring what’s being sent or received by individuals using the networks.

Privacy in the US is a notion that has traditionally been tied up in the Fourth Amendment, which states the following:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

The Fourth Amendment grew out of the experiences of early Americans, who disagreed with the British Crown’s use of general warrants to undermine political adversaries by searching for writings or publications that decried the British leadership and its policies. Later, in colonial America, general warrants were again used to gain entry to colonists’ homes to search for goods on which taxes had not been paid. As a result, the Fourth Amendment’s concept of privacy revolves largely around the notion of being free from government physical intrusion – which is understandable, given the Amendment’s text and history.

As you know, however, the word “privacy” itself appears nowhere in the text of the Fourth Amendment. Indeed, if you had reviewed the first hundred years’ worth of the Supreme Court’s many occasions to examine the Fourth Amendment, you would have found cases focusing mostly on whether a governmental intrusion constituted a physical “search” or “seizure,” and on whether evidence obtained in violation of the Fourth Amendment could be excluded from a trial. But not a word about a right to privacy as such. Indeed, in a fascinating case clearly reflecting that Fourth Amendment jurisprudence grew out of this very physical sense of searches and seizures – think of federal agents breaking down doors into your bedroom – Chief Justice William Howard Taft said in the 1928 Olmstead case that wiretapping a telephone conversation didn’t amount to a search or seizure, since the evidence in that case was obtained simply by “hearing.” So Mr. Olmstead, a manager of a bootleg operation during the Prohibition Era, lost his battle to suppress evidence of some incriminating phone calls and went to jail.

You would have had to wait until 1967, in Katz v. United States, to find the Supreme Court explicitly saying that the Fourth Amendment embraced a right to privacy and that the surveillance of a phone call was a “search” within that amendment. The FBI suspected Katz of transmitting gambling information to clients in other states over a payphone in a public phone booth, so they attached an eavesdropping device to the outside of the booth. The Supreme Court held that the Fourth Amendment’s protection against unreasonable searches and seizures covered electronic wiretaps of public payphones. Writing for the majority, Justice Potter Stewart held that the Fourth Amendment protects people, not places, and in his concurrence, Justice Harlan fleshed out a test for identifying a reasonable expectation of privacy. This right was further defined in Smith v. Maryland, a 1979 Supreme Court case in which police asked the telephone company to record the numbers dialed from the telephone of Michael Lee Smith, a man suspected of robbing a woman and then placing threatening calls to her. The Supreme Court held in Smith that there is no reasonable expectation of privacy for information (such as the phone numbers you are dialing) that is voluntarily given to third parties (such as telephone companies). Our Fourth Amendment jurisprudence continued to develop in this manner, with courts largely focusing on the type and location of the surveillance taking place, based upon the facts of each particular case, to determine whether a protected privacy interest was implicated. I might add as an aside that almost nowhere in the case law is the real focus on the substance of the communication, except insofar as you get to consider that by reason of where the communication occurred.

The Supreme Court’s review of privacy under the Fourth Amendment culminates at present in the Carpenter case, which is currently pending before the Court. Carpenter deals with whether the warrantless search and seizure of historical cell phone records that reveal the location and movement of the cell phone user violates the Fourth Amendment. The Court’s opinion in this case could be structured in one of two ways: either it will be narrowly tailored, with the Court recognizing that the question at issue is tied to specific facts about cell phones, or the opinion will serve as a vehicle to make broader rulings about privacy in this area. Regardless of which structure is chosen and even if the Court writes a brilliant opinion – which I fully expect it to do – the Justices are still evaluating this case through the lens of a changing technology. This opinion, when issued, will likely contribute to our privacy jurisprudence for years to come – and yet, it will be based entirely upon the cell phone technology of today. Twenty years ago, in the era of landline telephones, I’m not certain we could have contemplated the ability to extrapolate a person’s location or movements from where their telephone traveled. In another twenty years, will we still be using a device similar to today’s cell phone? Probably not, and yet we may well be constrained to apply the Carpenter ruling to whatever device or technology we have in our pockets – or perhaps implanted in our brains – at that time.

This case highlights one of the major limitations in applying Fourth Amendment jurisprudence in the digital era to which I wish to draw your attention. Courts are limited to deciding only the case or controversy before them based upon the set of facts presented, and properly so. But because of the manner in which courts must evaluate cyber devices, our privacy laws in this area are generally backward looking, and intuitively, that feels like the wrong approach when addressing a rapidly developing technology. By contrast, that approach does make sense in, say, the area of tort liability. Law students will recall that Mrs. Palsgraf’s injuries on a Long Island Railroad platform were said by Justice Benjamin Cardozo to not be “proximately caused” by the negligence of railway guards. Think about how that legal concept has been applied in the decades since then. Clearly, in the case of tort law, specific cases can yield general principles that are of considerable utility in application to future, but very different, facts.

But I submit that in the case of rapidly developing technology, a case-specific approach, especially one where the legal premise is grounded in the very technology before the court, is inherently problematic. It results in a patchwork quilt of legal precedent about privacy that takes into account only the particular technology directly before the court in each case, which in turn leads to decisions that are sometimes hard to reconcile or are distinguishable only by factors that seem of dubious significance. Most importantly for both the government and the private sector, it yields a set of legal determinations in this area that are, at best, of uneven value in predictive utility. For example, the government needs to know where the lines are to be drawn and equally the private sector wants some degree of certainty as to exactly what will and will not be protected.

Even the Supreme Court has begun to recognize the limitations on its ability to set out a legal framework that suitably marries Fourth Amendment doctrine with emerging technology. In 2012, Justice Sotomayor called into question the third party doctrine’s continued practicality in her concurrence in United States v. Jones, writing that “the premise that an individual has no reasonable expectation of privacy in information voluntarily disclosed to third parties…is ill suited to the digital age.” In Riley v. California, which was decided in 2014, Chief Justice Roberts wrote that comparing a search through a wallet, purse, or address book to a search of a cell phone “is like saying a ride on horseback is materially indistinguishable from a flight to the moon. Both are ways of getting from point A to point B, but little else justifies lumping them together. Modern cell phones, as a category, implicate privacy concerns far beyond those implicated by the search of a cigarette pack, wallet, or a purse.” At some point, as the Justices have signaled, the quantity of information that an individual shares online or stores in an electronic device may, in aggregate, provide a composite picture of that person that is qualitatively deeper and more insightful than any individual piece of information.

Beyond just its inability as currently applied to keep pace with technology, the Fourth Amendment also suffers from even greater limitations when it comes to protecting your privacy: it is powerless to protect you against any private sector activity. It simply doesn’t apply to private companies. This legal framework is far different from the notion of privacy that has driven European lawmaking and court decisions. Unlike the US, the concept of privacy in Europe focuses instead on the dignity of the person and it very much extends to private sector activity. Traditionally, this has resulted in laxer regulation of government surveillance, but much stricter laws about, for example, data protection, credit reporting, and workplace privacy. Europeans value the right to be left alone and protection from public embarrassment or humiliation. These concepts are so important to Europeans that they were woven into the EU’s Charter of Fundamental Rights, which mandates respect for private and family life and protection of personal data.

As cyber technology has progressed, the European concept of privacy has resulted in relatively strict laws in Europe about the handling of electronic information. For example, the General Data Protection Regulation, or GDPR, which takes effect in just a few days, applies to all EU organizations, any organizations seeking to offer goods or services to EU subjects, and any companies holding or processing the personal data of people in the EU. Companies that fail to comply with the GDPR can incur penalties of up to 4% of annual global turnover or €20 million, whichever is greater. The GDPR seeks to strengthen consent rules, requiring that disclosures be clear, intelligible, and easily accessible, and that it be easy for a user to withdraw consent. It ensures the right to be forgotten, which includes erasing data and preventing its further dissemination. The law also mandates privacy by design; data protection must be designed into systems, rather than added on. If a data breach occurs, companies must provide notification within 72 hours.
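To make the “whichever is greater” rule concrete, here is a minimal Python sketch of the fine ceiling just described. The function name and turnover figures are mine, purely illustrative; nothing here comes from the regulation’s text beyond the two statutory prongs.

```python
# Minimal sketch of the GDPR top-tier fine ceiling described above:
# the greater of 4% of annual global turnover or EUR 20 million.
# Function name and turnover figures are hypothetical, for illustration only.

def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
    """Return the ceiling for a top-tier GDPR fine, in euros."""
    return max(0.04 * annual_global_turnover_eur, 20_000_000)

# The EUR 20M floor binds for smaller firms; the 4% prong takes over
# once annual turnover exceeds EUR 500 million (0.04 x 500M = 20M).
for turnover in (100_000_000, 500_000_000, 2_000_000_000):
    print(f"Turnover EUR {turnover:>13,} -> max fine EUR {max_gdpr_fine(turnover):>13,.0f}")
```

The design point is simply that the €20 million floor dominates for smaller firms, while the 4% prong dominates for any company above roughly €500 million in annual turnover.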

Whether we applaud it or not, the European, Japanese and other nations’ movement toward comprehensive privacy regulation forces everyone in this digitally connected world to consider how we are going to reconcile different notions of privacy. We’ve been persistent in scrutinizing government intrusion into our daily lives – which is certainly a worthy focus – but have we done so at the expense of our personal dignity or the integrity of our private information, particularly given the rapid pace of technological development? Some might say that Europe’s approach is better suited to manage the privacy challenges posed by the digital age. Let me make crystal clear that the NSA is not, and I am not, advocating for diminished privacy protections or an increased ability to conduct surveillance. Rather, we at NSA feel duty bound to discuss these types of issues, and we’d like to do so transparently and openly to help reach a consensus as to the best approach.

My key point is that I believe we no longer have the luxury of addressing this issue in an ad hoc fashion through our court system, which is largely where our privacy laws have been shaped to date. With Europe pushing ever more aggressive data protection laws, the choice may soon be out of our hands. Companies operating internationally are being forced to adapt their policies and procedures to adhere to regulations implemented in foreign countries. If we want to play a role in shaping those policies to suit our own notions of privacy, we need an overarching effort to address privacy and digital technology here in the US.

The public and private sectors will need to take a holistic approach to addressing privacy concerns associated with our increasing reliance on digital technologies. Similar to the way that new drugs must be reviewed and approved for safety and efficacy before they come to market, perhaps we need laws or regulations requiring review of privacy and cybersecurity safeguards in new connected products before they can be made available to the public. Perhaps, as in Europe, we need stronger notice and consent requirements to regulate how our personal information can be used, shared, or disseminated online. Or perhaps, in some industries, we need to mandate the adoption of low-tech redundancies to safeguard against the loss or manipulation of personal information stored online. This need not necessarily entail government regulation, as industry-generated approaches might be sufficient – but my point is simply that we must have a societal dialogue about how we want to confront the problem.

We also need to consider what privacy means to us here in the US. Because we’ve emphasized freedom from government surveillance in our current privacy regime, and because the fact-specific legal analysis of that surveillance has focused, as discussed above, on the type and location of the surveillance, the same piece of electronic personal information may be protected from interception by the government, but could be disseminated, sold, or otherwise used by a private company with few, if any, limitations. In only narrow areas, such as HIPAA regulations for health records and Fair Credit Reporting Act requirements for financial records, do our laws focus on the type of information at issue. As I noted earlier about the absence of a focus on the content of communications, we could, for example, have a privacy scheme that was dependent in greater part on the substantive nature of communications rather than how communications are collected. To address these inconsistencies that have grown up around our legal privacy framework, we must evaluate carefully the manner in which private companies rely on connected technology to carry out their business activities. This includes considering not only how and when they collect personal information from customers, whether to store it online, and whether and to whom it should be disseminated, but also whether relying solely upon networked devices and systems is even the right choice for certain activities when particular sensitivities may be involved.

We also can’t forget that each one of us has a great deal of personal responsibility for our own private information. Regardless of what steps the government ultimately takes, we need to maintain awareness of and exercise some amount of discretion about how we are exposing our personal data over the internet.

As you spend the next day thinking about the cybersecurity problem and how to address it, please continue to keep in mind the possible ramifications on privacy and consider how our laws should be structured to account for those. Trends point toward the fact that privacy norms are likely to be the most pressing issue with respect to digital technology over the next decade. There are no longer excuses to be made for being caught flat-footed with respect to the security of our networks and systems, particularly when those systems store or transmit personal information. As many have noted, to the extent we do not coalesce around an accepted concept of privacy, and that failure delays or impairs an effective cybersecurity program, then Americans’ privacy is put at even greater risk.

(Via Lawfare – Hard National Security Choices)


U.K. Outlines Position on Cyberattacks and International Law:

[…] a big process question is how the U.K. position might catalyze broader diplomatic endeavors to clarify or create rules for cyberspace. Efforts within the U.N. to reach global consensus on these issues have so far failed, mostly because states’ interests are poorly aligned. Expert processes like the one that produced the Tallinn manuals can play useful roles, but they are no substitute for state practice and the articulation and defense of legal interpretations.

(Via Lawfare – Hard National Security Choices)

UPDATE: Isa Qasim’s take is deeper and describes eight key points from the speech:

United Kingdom Att’y General’s Speech on International Law and Cyber: Key Highlights:

First, it is important for states to publicly articulate their understanding of international law, especially in cyberspace. […]

Second, cyber is not lawless. […]

Third, cyber-operations that result in an “equivalent scale” of death and destruction as an armed attack trigger a state’s right to self-defense under the UN Charter’s Article 51. […]

Fourth, the Article 2(7) prohibition on interference in “domestic affairs” (the principle of non-intervention) extends in the cyber context to “operations to manipulate the electoral system to alter the results of an election in another state, intervention in the fundamental operation of Parliament, or in the stability of our financial system.” Wright acknowledges, however, that the exact boundary of this prohibition is not clear.

Fifth, there is no cyber-specific rule prohibiting the “violation of territorial sovereignty” beyond the Article 2(7) prohibition described in the point above.  […] This appears to be a rejection of the Tallinn Manual’s position on the issue, which had articulated an independent international legal rule prohibiting certain cyber operations as a violation of sovereignty.

Sixth, states are not bound to give prior notification of countermeasures when “responding to covert cyber intrusion.” […]

Seventh, there is no legal obligation to publicly disclose the information underlying a state’s attribution of hostile cyber-activity to a particular actor or state. Similarly, there is no universal obligation to publicly attribute hostile cyber activity suffered.

Eighth, a victim state does not have free rein to determine attribution for a malicious cyber operation before taking a countermeasure. Wright stated that “the victim state must be confident in its attribution,” and he added later, “Without clearly identifying who is responsible for hostile cyber activity, it is impossible to take responsible action in response.” This view contrasts with other writings in this field (see Sean Watts’ article at Just Security).


(Via Just Security)


I enjoyed and learned from 100 Years of Feynman, which starts from his eponymous formula and evolves into these tips for solving physics problems:

  1. Read the question! Some students give solutions to problems other than that which is posed. Make sure you read the question carefully. A good habit to get into is first to translate everything given in the question into mathematical form and define any variables you need right at the outset. Also drawing a diagram helps a lot in visualizing the situation, especially helping to elucidate any relevant symmetries.
  2. Remember to explain your reasoning when doing a mathematical solution. Sometimes it is very difficult to understand what students are trying to do from the maths alone, which makes it difficult to give partial credit if they are trying to do the right thing but just make, e.g., a sign error.
  3. Finish your solution appropriately by stating the answer clearly (and, where relevant, in correct units). Do not let your solution fizzle out – make sure the marker knows you have reached the end and that you have done what was requested. In other words, finish with a flourish!

(Via In The Dark)

For InfoSec we can extrapolate three similar tips for engaging with clients, whether internal or external:

  1. Read the RFP/RFI! Listen to the customer! Write down, in your own simple words, your understanding of the client’s request. Communicate it back to them to make sure the understanding is as complete as possible.
  2. When delivering the response/proposal/etc. make sure you “connect the dots” between the client’s request and your solution. Make sure you account for and document assumptions. Explain why the proposal is the way it is.
  3. Finish your response appropriately by stating the answer clearly. Do not let your solution fizzle out – make sure the client knows you have reached the end and that you have done what was requested. In other words, finish with a flourish!

Item 1 reminds me of a recent almost-bad event at work. A potential client reached out about an RFP. They were looking for a security solution with a specific scope and desired outcome. We had a meeting with the client about their goals and objectives. They were clear and precise.

Skip ahead less than one week and suddenly a few leaders in my organization decided to make our RFP response something completely different. My vocal dissents were vetoed. The proposal proceeded with this alternate option. It was as if the client came to our restaurant to eat dinner and we decided to sell them recipe books instead.

Worse, there was nothing in this new approach that was truly new – every piece was obviously recycled generic sales material.

The client was not amused. When we met again the client shut down all extraneous-to-their-request discussions and materials. Since some of the team had not abandoned answering the RFP directly, we were able to pivot and still make a strong proposal.

Another recent proposal I worked on illustrates doing all three items well. The client clearly stated their goals in conversation but their RFP was mostly untethered to the goals, almost as if two different teams drafted each independently. Subsequent client conversations gave us what we needed to form a more complete understanding of the business needs.

The proposal was large compared to the RFP, but the space was needed to completely connect the dots between the client’s broad & disconnected needs and how we would deliver them for the desired business outcome. The response included all of the Who-What-Where-When-Why-How structures to clearly communicate our solution.

There is no shortage of experts in this field. By and large we all think we are one, so we rush to a solution without always listening and understanding. Taking a page out of Richard Feynman’s approach to solving physics problems can help address such failings.


Appliance Companies Are Lobbying to Protect Their DRM-Fueled Repair Monopolies

The bill (HB 4747) would require electronics manufacturers to sell replacement parts and tools, to allow independent repair professionals and consumers to bypass software locks that are strictly put in place to prevent “unauthorized” repair, and would require manufacturers to make available the same repair diagnostic tools and diagrams to the general public that it makes available to authorized repair professionals. Similar legislation has been proposed in 17 other states, though Illinois has advanced it the furthest so far.

Companies such as Apple and John Deere have fought vehemently against such legislation in several states, but the letters, sent to bill sponsor David Harris and six other lawmakers and obtained by Motherboard, show that other companies are fighting against right to repair as well.

(Via Motherboard)

The right to repair used to be assumed. I remember working on my grandfather’s car with my Dad. I remember changing oil and tires and brakes and head units and shocks and mufflers, etc., for that and other cars. And I wasn’t (and still am not) a car guy.

I built and fixed computers when replaceable parts were the norm.

My Dad, members of my family, and people with whom I went to university worked on farms and ranches & regularly repaired the heavy equipment. These were the real instances of duct tape and baling wire.

How about the early telephone system, which sometimes used barbed wire stretched along fences in rural communities?

We’re not in the early telephone days. We’re in a world where companies can prevent their customers from having agency over products they purchase. Companies can put their customers at risk and not allow the very same customers to protect themselves or even be able to figure out if they’re at risk in the first place.


Yahoo gets $35 million slap on wrist for failing to disclose colossal 2014 data breach

The SEC forced Yahoo to pay $35 million in penalties to settle charges that it misled investors. The breach has been widely publicized and is considered one of the largest data breaches on record.

Yahoo itself is now known as Altaba; its operating business was acquired last year by Verizon for about $4.48 billion.

What would have been paid under GDPR? $198M if this article is correct.

Calling this a “slap on the wrist” is an insult to wrist slaps everywhere.


With a few exceptions, InfoSec podcasts sound the same to me as they did in 2014, both in production quality and in content.

There are two daily shows: SANS ISC Storm Cast and the Cyberwire. They run the gamut – SANS has a brief, unpolished production sense and the Cyberwire is perhaps overproduced and oversponsored. Both provide solid daily content. I’m happy to skip both shows’ “research” components.

And then there’s the rest.

Most non-vendor podcasts fall into two general categories: echo chambers and interviews.

The “echo chambers”, essentially panel shows full of inside jokes, are mostly gone from my pod catcher. Their production quality is close to zero and they’re mostly op-ed (opinion & editorial) with no counter argument. On PVCSec we tried and mostly failed to counter the standard InfoSec podcast.

The interview shows can be better. The production quality tends to be higher. Several make the interview more about the show host/interviewer and less about the interviewee. Sponsored shows are just that.

There is a third category: “NPR”-style free podcasts. These are the ones that talk about topics most other typical security podcasts miss – legal, governmental, and diplomatic.

Here’s what I’m catching:

If your InfoSec podcast is not on my list and you want it on there, let me know why I should include it.


Casino Gets Hacked Through Its Internet-Connected Fish Tank Thermometer

Nicole Eagan, the CEO of cybersecurity company Darktrace, told attendees at an event in London on Thursday how cybercriminals hacked an unnamed casino through its Internet-connected thermometer in an aquarium in the lobby of the casino.

According to what Eagan claimed, the hackers exploited a vulnerability in the thermostat to get a foothold in the network. Once there, they managed to access the high-roller database of gamblers and “then pulled it back across the network, out the thermostat, and up to the cloud.”
(Via Hacker News)

I didn’t get a chance to write about this when it came out, but its dissemination came at an opportune moment. About an hour earlier I had been using the Target breach as an example of third-party risks.

This story made an excellent follow-up.


Evaluating the U.K.’s ‘Active Cyber Defence’ Program:

In November 2016, the U.K. government launched its Active Cyber Defence (ACD) program with the intention of tackling “in a relatively automated [and transparent] way, a significant proportion of the cyber attacks that hit the U.K.” True to their word, a little over a year on, last week the U.K.’s National Cyber Security Centre (NCSC) published a report (over 60 pages long) on their progress to date. The report itself is full of technical implementation details. But it’s useful to cut through the specifics to explain exactly what ACD is and highlight its successes—and how the program could benefit the United States as well.

There are three defining features of the ACD program: government-centered action, intervention, and transparency.

(Via Lawfare – Hard National Security Choices)

Read the article for a nice summary of the report, including the section towards the end that speaks to potential benefits for the U.S.

https://www.lawfareblog.com/todays-revolution-cybersecurity-and-international-order

This is a review of Lucas Kello’s The Virtual Weapon and International Order (Yale University Press, 2017):

The questions that Kello’s proposals raise simply prove his point about the need for interdisciplinary discussions to tackle the multifaceted challenges that cybersecurity poses. The book’s three-part typology of technological revolution will be particularly helpful in framing future discussions of cybersecurity both within and outside of international relations. And it can also be deployed to assess future technological developments. As Kello notes, “the distinguishing feature of security affairs in the current epoch is not the existence of a revolution condition but the prospect that it may never end” (257). Cyberweapons are today’s revolution, but tomorrow will surely bring another.

More of a political science perspective.

From Hacker News:

A team of security researchers—one that focuses mainly on finding clever ways to get into air-gapped computers by exploiting little-noticed emissions of a computer’s components, like light, sound, and heat—has published new research showing that they can steal data not only from an air-gapped computer but also from a computer inside a Faraday cage.

Fascinating research for sure. If you happen to be one of the few working in an environment where air-gapping and Faraday cages are common, this highlights that they are not 100% effective in isolation (no pun intended). This is a reminder of the value of good security hygiene, physical and analog and digital, and occasional validation of assumptions.

For the other 99.999% of security professionals, there are more practical and pragmatic risks requiring addressing with a higher return on investment. This is a reminder of the value of good security hygiene, physical and analog and digital, and occasional validation of assumptions.

See what I did there?

See Also:

TechRepublic

Infosecurity