This is a review of Lucas Kello’s The Virtual Weapon and International Order (Yale University Press, 2017):

The questions that Kello’s proposals raise simply prove his point about the need for interdisciplinary discussions to tackle the multifaceted challenges that cybersecurity poses. The book’s three-part typology of technological revolution will be particularly helpful in framing future discussions of cybersecurity both within and outside of international relations. And it can also be deployed to assess future technological developments. As Kello notes, “the distinguishing feature of security affairs in the current epoch is not the existence of a revolution condition but the prospect that it may never end” (257). Cyberweapons are today’s revolution, but tomorrow will surely bring another.

The book takes more of a political science perspective.

From Hacker News:

A team of security researchers—one that focuses mainly on finding clever ways to get into air-gapped computers by exploiting little-noticed emissions from a computer’s components, like light, sound, and heat—has published new research showing that they can steal data not only from an air-gapped computer but also from a computer inside a Faraday cage.

Fascinating research for sure. If you happen to be one of the few working in an environment where air-gapping and Faraday cages are common, this highlights that they are not 100% effective in isolation (no pun intended). This is a reminder of the value of good security hygiene, physical and analog and digital, and occasional validation of assumptions.

For the other 99.999% of security professionals, there are more practical and pragmatic risks that need addressing, with a higher return on investment. This is a reminder of the value of good security hygiene, physical and analog and digital, and occasional validation of assumptions.

See what I did there?

See Also:

spectre and the end of langsec — wingolog:

The basis of language security is starting from a programming language with a well-defined, easy-to-understand semantics. From there you can prove (formally or informally) interesting security properties about particular programs. For example, if a program has a secret k, but some untrusted subcomponent C of it should not have access to k, one can prove if k can or cannot leak to C. This approach is taken, for example, by Google’s Caja compiler to isolate components from each other, even when they run in the context of the same web page.

But the Spectre and Meltdown attacks have seriously set back this endeavor.

I suggest reading the post to get the full take.
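As a toy illustration of the kind of language-level guarantee the post is talking about (the names `makeChecker` and `untrustedC` here are my own, purely illustrative, and have nothing to do with Caja’s actual API): a secret held in a closure can be handed to an untrusted component only through a narrow capability, and the language semantics let you reason that the secret itself cannot leak.

```typescript
// Toy sketch: a secret k lives inside a closure. The untrusted
// component receives only a checking capability, never k itself.
function makeChecker(secret: string): (guess: string) => boolean {
  return (guess: string): boolean => guess === secret;
}

// Untrusted component C: it can probe via the capability, but at the
// language level it has no way to read the secret value directly.
function untrustedC(check: (g: string) => boolean): boolean {
  return check("letmein"); // learns at most one bit per query
}

const check = makeChecker("k-s3cr3t");
console.log(untrustedC(check)); // false
```

Spectre’s significance, as the post argues, is that this language-level reasoning no longer holds on real hardware: a speculative-execution side channel can leak the secret out of the closure regardless of what the semantics promise.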

Some of my time is spent talking with clients about secure development life cycle practices and tools to help bolster security early in the process. I’ve abstractly reflected on how I was taught/learned to code using what is referred to as the Unix approach – small, well-understood, behaviorally consistent components brought together to make a more complex system.

This was in the days before these large package management systems.

I was reminded of the infamous 11-line JavaScript NPM package that implemented a “left-pad” function, which its developer unpublished. Literally thousands of other packages relied on this simple one, causing the whole dependency “house of cards” to collapse. The story bears revisiting as a reminder.
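For anyone who has forgotten what all the fuss was about, the package amounted to roughly this (a from-memory sketch in TypeScript, not the exact published source):

```typescript
// Roughly what the unpublished left-pad package did: pad a string (or
// number) on the left until it reaches the target length.
function leftPad(str: string | number, len: number, ch: string = " "): string {
  let s = String(str);
  while (s.length < len) {
    s = ch + s; // prepend the pad character one at a time
  }
  return s;
}

console.log(leftPad(5, 3, "0"));  // "005"
console.log(leftPad("abc", 5));   // "  abc"
```

That something this small could sit under thousands of dependency trees is exactly the point.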

Now think about this: that was a software-based issue that, while hugely impactful, was easy to fix (select 11 lines of code, copy, paste). What happens when hardware isn’t behaviorally consistent, or is so fundamentally flawed that its insecurity isn’t fixable?

Taking me back even further, I’m reminded of the various Intel floating-point issues of the ’80s and ’90s.

I drifted off topic.

What are your thoughts?

From Quartz:

Here’s how much time a single American spends on social media and TV in a year:

  • 608 hours on social media
  • 1,642 hours on TV

Wow. That’s 2,250 hours a year spent on TRASH. If those hours were spent reading instead, you could be reading over 1,000 books a year!

The numbers are compelling. Arguably, even readers who stay within their own bubble will be exposed to thoughts and ideas outside their preconceived notions, simply because no one is 100% dogmatic in exactly the same way.

The impetus for the article is this quote from Warren Buffett, very much de rigueur:

Read 500 pages like this every day. That’s how knowledge works. It builds up, like compound interest. All of you can do it, but I guarantee not many of you will…

I’m on board. While it may seem obvious, I will say it anyway: you don’t have to read. Audiobooks are just as good, though it’s harder to underline meaningful passages.

My path and recommendation to you, Dear Reader, is a bit different: Reduce the number of books per year but add in reading the capital-N News daily.

I subscribe to and read the New York Times, the Washington Post (JP), the Japan Times (with which I get the New York Times), and the Guardian (JP Weekly). I also read the Atlantic Monthly (JP) and am thinking about picking up the Economist again, which I used to always look forward to reading each week. Yes, I am that cool.

My big change is moving my news consumption to the evening once I arrive home. I find I get too wound up/depressed/angry when I read the News in the morning, thus ruining my day. Tech news, security news, and bits I need for work I read anytime.

Also, I make use of podcasts: the NPR hourly news update & Up First, NHK English news, the various APM Marketplaces, The CyberWire, the SANS Internet Storm Center Stormcast, The Daily from the New York Times, and the BBC World Service Newshour. I play these at 1.5 speed or faster, with the two security podcasts, the NPR hourly update, and the NHK news at the top. I start playing them as I leave the office. By the time I’m home and finished with dinner, the podcasts have updated me nicely.

I’m in the process of reevaluating my news feeds. The method is much the same as evaluating cybersecurity threat intelligence feeds. Is it:

  • Timely?
  • Accurate?
  • Actionable?
  • Updated?
  • Adding value?

I categorize my information intake in several ways:

  • News
  • Analysis, Editorial & Opinion (most blogs, podcasts, and personal social media feeds)
  • Technical
  • Press releases

With all of this, I find myself overwhelmed with data. Much is redundant and not adding value. Some adds value but isn’t timely. Some opinion is fobbed off as news. Branded content permeates.

What sources do you use? How do you consume them? How do you value them?

From Security Affairs:

A Google expert discovered a new stack-based overflow vulnerability in AMD CPUs that could be exploited via crafted EK certificates.

Chip manufacturers are in the tempest: while the media continue sharing news about the Meltdown and Spectre attacks, Cfir Cohen, a security researcher on Google’s cloud security team, disclosed a stack-based overflow vulnerability in the fTPM of AMD’s Platform Security Processor (PSP).

The vulnerability affects 64-bit x86 processors; the AMD PSP provides administrative functions similar to the Intel Management Engine.

We’re going to see a lot more investigation into hardware vulnerabilities. It won’t be pretty, I expect.

What researchers discover will not be easy or inexpensive to fix. My hope is that hardware manufacturers realize it is less expensive and better for their reputation to improve their processes in relation to secure-by-design.

The Strange WannaCry Attribution:

I’ve been trying to figure out why the U.S. government thought it was useful to attribute the “WannaCry” attack to North Korea …

… I must be missing something here. Probably what I am missing is that the public attribution sends an important signal to the North Koreans about the extent to which we have penetrated their cyber operations and are watching their current cyber activities. But that message could have been delivered privately, and it does not explain why the United States delayed public attribution at least six months after its internal attribution, and two months after the U.K. had done so publicly. Perhaps the answer to the delay question, and another thing I am missing, is that the public attribution is part of a larger plan related to a planned attack on North Korea because of its nuclear threat. Bossert’s unconvincing op-ed and incoherent press conference wouldn’t support either interpretation; and if either interpretation is right, it still comes at a cost to general deterrence. But perhaps, surely, hopefully, there is more here than meets the eye.

(Via Lawfare – Hard National Security Choices)

This WannaCry attribution was a head-scratcher for me, too. Listeners of the late, lamented PVC Security podcast know that I am generally not a fan of attribution, or, more specifically, that I see only limited real-life usefulness in it for 97% of companies’ and individuals’ security. For governments, intelligence agencies, the military, and law enforcement there is more value, but how much value this long after the fact?

This piece by Jack Goldsmith lays out pretty much every issue I have with this plus provides something of a timeline for those for whom this is ancient history (in security terms, anyway).

Got a theory or opinion on this?