Why you need a digital forensics team (and the skills to look for) by Karen Epper Hoffman:

In a world where enterprises are embracing the fact that breaches are not a matter of ‘if, but when,’ it is becoming increasingly important to develop internal and external resources to investigate and oversee the impact of attacks after they have happened.

Yes. But from here, my opinion diverges.

Digital forensics is a relatively recent skills concentration

It’s not. I took a digital forensics class with SANS about 10 years ago. And when I hired someone for that role in Canada about 9 years ago I had many qualified candidates with experience to choose from.

one that does not necessarily require the same talents, expertise or background as other cybersecurity positions.

Also wrong. While forensics requires its own skill set, thinking it is divorced from the rest of security is absurd. Context is important, and not understanding security will make analysis ineffectual.

And while more enterprises are recognizing that they need such talent on the back-end, as it were, there are still holdouts that are entirely focused on detection and prevention, to their detriment.

Wrong. Just wrong. That’s like not getting checkups or taking medication and then, when illness happens, spending time and money to track down who in your family tree made you prone to the heart disease you need major surgery to fix.

“I think this is actually a misconception [that] organizations do not necessarily need to build out digital forensics teams in-house,” says Sean Mason, director of incident response for Cisco Security Services, adding that Cisco is building out its own forensic capability via its incident response services team. A key problem, Mason says, is “there is not enough talent to go around and, generally speaking, most organizations don’t have enough demand to require a full-time team on staff.”

Some companies and organizations absolutely should have this capability in house — large financial, energy, and government organizations leap to mind — but the bulk of companies either don’t need the capability, as it would take resources away from higher-ROI functions, or could make no use of the data. Also, digital forensics is almost begging to be done as-a-Service. (Full disclosure: IBM employs me, IBM offers this function as a service, and I consult with companies about it. My views on this are mine and not my employer’s. Cisco is an IBM partner, btw.)

As I said, most companies aren’t mature enough to make use of the information even if they have it. If your security posture is already weak, what countermeasures can you hope to employ with such data?

Munish Walther-Puri, chief research officer at dark web monitoring company Terbium Labs, points out that digital forensics requires a combination of “investigation, intelligence, and innovation.”
Digital forensics teams are a complement to any IT team “because they figure out the who, when, when, where and why a bad actor came into the system,” says Avani Desai, president of audit and accounting firm Schellman & Co. “They help paint a picture of the incident and provide guidance on how to mitigate the risk of that happening again.” The forensics teams also take past data and processes and build upon them to make sure they have the tools to handle issues that are getting significantly tougher to solve, Desai adds.

Let’s say you figure out the “who, when, when (sic), where and why a bad actor came into the system”. The where bit might be actionable, but the rest? As an understaffed and underfunded IT or Security team, how will the knowledge that Russian organized crime attacked your company on a Tuesday a year ago change anything for you?

Darien Kindlund, vice president of technology for Insight Engines, a provider of natural language search technology, points out that digital forensics is “an important pillar in any security operations team, in order to assess and understand tools, tactics, and procedures (TTPs) used by attackers to compromise a firm. That way, the firm can stop future breaches using these same TTPs by new attackers. A firm’s ability to understand how these attacks work is directly tied to how effective their digital forensics team is.”

Again, in some contexts digital forensics can be useful, even valuable. But 99% of organizations and companies are better off hiring it out as-a-Service.

Time is not addressed here: digital forensics takes time, and time is not a security practitioner’s friend. By the time an in-house team provides actionable intelligence, it is probably too late. A service provider might be faster because it can leverage what it sees across multiple clients, but even then the work takes time.

My digital forensics criticisms also apply to a lesser extent to threat intelligence. What use are Indicators of Compromise (IOC) if you’re unable to act on them?
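To make “acting on an IOC” concrete: matching a feed against your own logs is the easy part. Below is a toy Python sketch, assuming a hypothetical newline-delimited feed of known-bad IPs and a hypothetical connection log. Everything after the match (blocking, reimaging, investigating the host) is the hard part, and exactly where immature organizations stall.

```python
# Toy IOC matching: the trivial half of "acting on" threat intel.
# File names and the log format are hypothetical stand-ins.

# Known-bad IPs from an intel feed, one per line.
with open("ioc_ips.txt") as feed:
    bad_ips = {line.strip() for line in feed if line.strip()}

# Connection log lines: "<timestamp> <src_ip> <dst_ip>".
with open("outbound.log") as log:
    for line in log:
        timestamp, src, dst = line.split()[:3]
        if dst in bad_ips:
            # The match is cheap; the response is what requires maturity.
            print(f"{timestamp}: {src} contacted known-bad {dst}")
```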

There is still too much focus on attribution. Better security hygiene returns more value.

Here is a good guide: if you can’t make use of threat intelligence then digital forensics is nothing but show.

Also, I disagree with the article’s implied definition of digital forensics. It is more than just outsider attack attribution. It is very valuable for dealing with malicious insiders, again after the fact. If your organization is litigious, such a team is invaluable.

Regardless, forensics plays a valuable role. As an internal team, a managed service, or an organizational goal, digital forensics can enrich a security team’s intelligence.

Inspiration from Rhiannon Giddens, Arthur Schopenhauer, James Clear, and More by Trent Hamm:

5. Aristotle Onassis on leadership

“One of the tests of leadership is the ability to recognize a problem before it becomes an emergency.” – Aristotle Onassis
The key to this, in my opinion, is reflection. Sit down regularly, take clear stock of your own life and the things you’re responsible for in life, and ask yourself honestly whether those things are headed in the direction that you want. Are there any issues that could blow up?
One of the biggest themes in my life over the past few years has been to establish patterns of reflection and meditation. I’ve found that the more I reflect on the things I learn and observe, the things I’ve done in the past in terms of what worked and what didn’t, and the things I know are to come, the better equipped I seem to be in terms of handling life in a successful manner.
I might see major mistakes without much reflection, but I don’t see minor ones unless I reflect. I might see major problems without much reflection, but I don’t see minor ones that could easily grow into major ones. I might see big things coming up in the future without much reflection, but I don’t consider how to deal with them or see smaller ones coming up without it.

I think the concept of “leading by walking around” is important here. Knowing what your team is worried about or unexpectedly spending time on is key. If you have a geographically distributed team, it’s harder. A daily scrum of quick hits, with follow-up, can help.

Boasting about how many hours you work is a sign of failure by Olivia Goldhill:

Talking about how many hours you work is not impressive. Far from being an indication of industrious achievements or alpha status, it should be seen as a professionally embarrassing sign that, quite frankly, you have nothing else to boast about.
Showing off about overwork is now so ubiquitous it’s difficult to remember a time when lack of sleep and hours spent at the office weren’t talked of with a puff of pride. “We just maximize every hour we can, however we can do it,” Twitter executive chairman Omid Kordestani told the Wall Street Journal (paywall) in 2015, explaining that he became chief executive Jack Dorsey’s driver so they could talk business during their commute. “When you hear the so-called apocryphal stories about Tim Cook coming to work in the wee hours and staying late,” Don Melton, who started Apple’s Safari, told the Debug podcast in 2014, “it’s not just some PR person telling you stories to make you think that Apple executives work really hard like that. They really do that.” And, of course, just last month, the patron saint of work boasts, Tesla chief executive Elon Musk, declared that “nobody ever changed the world on 40 hours a week.” Musk said in November that he worked 120-hour weeks, and on Twitter claimed that 80 to 100 hours per week is necessary to change the world.
As countless studies have shown, this simply isn’t true. Productivity dramatically decreases with longer work hours, and completely drops off once people reach 55 hours of work a week, to the point that, on average, someone working 70 hours in a week achieves no more than a colleague working 15 fewer hours.

Why though, if we know more work doesn’t lead to better results, does anyone perceive overworking as “good”? Western society came to see work as virtuous thanks to Christian notions that work–and, in particular, work that involves suffering–is a noble endeavor that brings people closer to God. Though the religious overtones have since been abandoned, long working hours have retained their status as a token of worth. When Musk says you can only change the world if you work 80 hours a week, he’s not presenting a serious argument, but is making a moral assertion that working more is inherently good. And so, those who boast about work are inadvertently revealing their devotion to an outdated and thoughtless principle. True world leaders don’t need to prove their value by emphasizing their slavish devotion to work. They have better things to do.

It’s interesting to see the difference between Silicon Valley and Japan on working so many hours.

Teaching Cybersecurity Law and Policy: My Revised 62-Page Syllabus/Primer by Robert Chesney:

Cybersecurity law and policy is a fun subject to teach. There is vast room for creativity in selecting topics, readings and learning objectives. But that same quality makes it difficult to decide what to cover, what learning objectives to set, and which reading assignments to use.
With support from the Hewlett Foundation, I’ve spent a lot of time in recent years wrestling with this challenge, and last spring I posted the initial fruits of that effort in the form of a massive “syllabus” document. Now, I’m back with version 2.0.
At 62 pages (including a great deal of original substantive content, links to readings, and endless discussion prompts), it is probably most accurate to describe it as a hybrid between a syllabus and a textbook. Though definitely intended in the first instance to benefit colleagues who teach in this area or might want to do so, I think it also will be handy as a primer for anyone–practitioner, lawyer, engineer, student, etc.–who wants to think deeply about the various substrands of this emergent field and how they relate to one another.
Feel free to make use of this any way you wish. Share it with others who might enjoy it (or at least benefit from it), and definitely send me feedback if you are so inclined ([email protected] or @bobbychesney on Twitter).

I’ve been poring over this for about a week and am loving the detail. I asked for a better PDF with the diagrams fixed and working HTML links.

Even if you’re a security professional operating outside the U.S. like me, still get this and read through it. It will trigger local conversations and research.

A Roadmap for Exceptional Access Research by Mayank Varia:

_This is part of a series of essays from the Crypto 2018 Workshop on Encryption and Surveillance._
Over the course of the recurring “crypto wars,” technologists and government officials have debated the social desirability of encryption systems that provide End-to-End (E2E) protection from sender to receiver, as well as the technological viability of building Exceptional Access (EA) encryption systems that provide the government with the selective ability to read decrypted contents. This tussle has left E2E encryption in an unstable equilibrium for the past several decades, with many countries mandating recoverability of plaintext (or threatening to do so) via blunt techniques that are much less safe than EA could be.
These crypto wars have been fought over the past several decades as a _political_ battle. Even though all parties have the common goal of improving public safety, they enter the fray with differing personal and professional biases, which cause them to prioritize either improving digital security to prevent crime or providing tools for law enforcement to investigate crimes with a digital component. Predictably, this politicization has led to dour consequences. Views become polarized into an “us vs. them” mentality, with technologists and government officials both claiming the imperative to act as strongly as the law permits and abhorring any “middle ground.” Illogical arguments proliferate, such as the focus of EA proponents on terrorism even though EA is more likely to aid in the investigation of ordinary criminal activity, and the focus of EA opponents on past mistakes that bear no relation to modern proposals. Finally, participants tend to use provocative, alienating rhetoric like “responsible encryption” or “backdoors.”
Rather than continuing to politicize encryption, in this article I propose instead to use the lessons learned from the decades-long crypto policy debate to inform the process of developing EA technology. This article is organized into four parts: (1) reviewing the benefits and risks of an EA encryption system from a policy viewpoint; (2) providing a skeletal definition of the security guarantees that EA encryption should provide in order to mitigate the policy risks; (3) listing several possible capabilities that an EA system might provide in an attempt to identify a minimum viable product together with law enforcement; and (4) constructing policy to revive research into EA’s technology challenges, an area that has been mostly dormant for two decades.
Reviving technological research into EA is essential to improve the state of the policy debate. In the short term, this would provide time and space for technologists to improve upon the state of the art in exceptional access: the reality today is that all existing EA proposals have been insufficiently vetted and, if deployed, would pose systemic risk to the digital ecosystem. More research and engagement between law enforcement and the technical community could change that–but this cannot happen when these groups are pitted against each other in a political debate. In the longer term, developing a variety of modern EA proposals would crystallize and refocus a policy debate currently lacking in technical specifics.
Before continuing, I want to emphasize upfront that many of the policy and technology arguments surrounding encryption are subtle, and the brevity of this article may diminish some of this nuance. For example, I sometimes lump together cryptosystems that protect data in transit with those that protect data at rest, even though EA affects those systems in different ways, and I use the term E2E to denote any cryptosystem that precludes a government or technology intermediary from reading the contents.

1. What are the benefits and risks of exceptional access?

“Finding the right balance between security, privacy and surveillance is a complex problem. The current practice of intelligence agencies adding covert backdoors into systems deployed worldwide is hugely risky and carries with it a significant potential for collateral damage, since criminals or adversarial states can misuse the backdoors. … What seems clear is that there are no perfect solutions. Governments need to be able to enforce their laws and protect their citizens from adversaries, however, this should not be done at any cost. … Unfortunately, currently the surveillance debate is polarised with absolutes being pushed by both sides. It is highly unlikely that either extreme – total surveillance or total privacy – is good for our society. Finding the right balance should be framed as an ethical debate centred around the potential for collateral damage.”
Governments intermittently clamor for changes to encryption to provide law enforcement access. The current such clamoring comes in the form of Australia’s Assistance and Access Bill, which would “establish frameworks for … industry assistance to law enforcement and intelligence agencies in relation to encryption technologies,” as well as the related Five Eyes Statement of Principles on Access to Evidence and Encryption (though it remains unclear what this statement portends).
In the face of such clamor, it is worth assessing the policy benefits and drawbacks of exceptional access as a concept, independent of the details of any particular EA construction (or in other words, even if there is consensus that the system “works properly”). As with many policy debates, both action and non-action create risks and unintended second-order consequences.
EA systems create various ethical and policy concerns:
* Public safety and national security: A system that permits exceptional access by governments can be more vulnerable to external hacking, insider threat, or human error than the same system incorporating E2E encryption. A government official or employee of a technology intermediary could purposely or inadvertently (e.g., if their computer is compromised or if identification checks fail) betray personally sensitive information to those who might use it for nefarious purposes. Any breach or misuse of an EA system impacts individuals, businesses, and governments equally because everyone uses similar commodity computing devices over a shared network.
* Democratic values: Private speech is essential to a democracy. Because encryption is the primary method of achieving privacy for digital speech, any type of surveillance can have a chilling effect on speech and social behavior.
* International ramifications: As the Communications Assistance for Law Enforcement Act (CALEA) demonstrated, legal and technological requirements within the United States that facilitate government access can be quickly replicated by many other countries around the world, including those that have less concern for their citizens’ public safety and democratic values.
* Alternatives: Any required change to encryption would have limited effectiveness given that E2E encryption schemes already exist and may be legally sold in other countries; it is technologically possible to construct an E2E messaging system starting from only EA components; and other cybersecurity protections, like network anonymity and remote deletion, may also be deployed to thwart government from finding the digital data that it wishes to decrypt.
Conversely, the status quo on encryption also presents several policy concerns along similar dimensions.
* Public safety and national security: Many crimes in the physical world have “a technology and digital component,” according to FBI Director Christopher Wray, and some “occur almost entirely online” (one example is child sexual exploitation). EA could improve deterrence of crimes that primarily occur in the digital world. Additionally, any action taken now to build an EA system may forestall a poorly-thought-out action taken hastily by a government later.
* Democratic values: Because E2E encryption upends some of the norms and established processes for law enforcement investigations that have evolved over centuries of societal input, a democratic society should consciously decide whether to adopt it rather than simply delegating the decision solely to the tech intermediaries.
* International ramifications: More than half of the world’s population currently lives in countries with laws that mandate recoverability of encrypted content. The encryption systems currently deployed in these jurisdictions permit more surveillance with less accountability than a properly-designed EA system might offer.
* Alternatives: The government’s main alternative to EA, lawful hacking, is fraught with its own ethical and policy concerns. It is an imprecise tool, because everyone is vulnerable to the flaws it exploits; it disincentivizes government from reporting vulnerabilities to vendors as part of the Vulnerabilities Equities Process, thereby worsening the relationship between government and the tech intermediaries; and it may lack sufficient legal oversight to check its use. To quote Steven Levy, “lawful hacking is techno-capitalism at its shadiest.”
Attempts to balance the relative merits of these competing sets of policy concerns (e.g., within these recent studies) are exceedingly difficult, especially because everyone involved is sincerely trying to improve public safety. The U.S. government is requesting tools that aid law enforcement rather than enable dragnet surveillance, and technology companies promote E2E encryption to protect their customers proactively against crimes like stalking, blackmail and identity theft.
Perhaps the most important action that can be taken to advance the policy debate is to acknowledge the good faith and common interests of everyone involved. As the Encryption Working Group within the House Judiciary and Energy & Commerce Committees astutely observes, it is simultaneously true that:
The widespread adoption of encryption poses a real challenge to the law enforcement community and strong encryption is essential to both individual privacy and national security. A narrative that sets government agencies against private industry, or security interests against individual privacy, does not accurately reflect the complexity of the issue.
Both sides of the Encryption Working Group’s observations can be found even within the U.S. federal government. Within the Justice Department, the last three directors of the FBI and the current deputy attorney general have argued for law enforcement access to encryption systems. Within the intelligence community, several former leaders of intelligence organizations have argued against changes to encryption on national security grounds.
Subsequent sections of this article are written under the assumption that the people broadly decide to explore building an EA encryption system. I follow this approach not to put my thumb on the scale, but instead because there is little that needs to be said or done if society opts for the alternative: E2E encryption, after all, already exists. In the next two sections, I describe how technological choices regarding EA’s security guarantees and access capabilities have nuanced policy implications of their own.

2. What limits on access should EA guarantee?

I suspect that the answer [to exceptional access] is going to come down to how do we create a system where the encryption is as strong as possible, the key is as secure as possible, it is accessible by the smallest number of people possible for a subset of issues that we agree are important.
– President Barack Obama, SXSW 2016
Cryptographers labor intensely over, and rely heavily on, definitions that codify the security requirements and expected capability they want from system features like encryption. Because any specific implementation believed to be secure today can be broken tomorrow, cryptographers emphasize definitions because they guide the search for candidate constructions and allow the community to decide how a system should work (that is, independent of any limits on the current ability to construct it).
Defining the security guarantee for E2E encryption is conceptually straightforward: the intended communicators hold a key that allows them to read messages, and everyone else in the world is equally clueless about the message contents. Put simply, possession of the key is equivalent to the capability to read messages. Defining EA encryption is more nuanced because the system must distinguish between different types of unintended recipients: out of the billions of people in the world who don’t have the cryptographic key, only the government is (sometimes) empowered to read a message anyway. Sadly, throughout the decades of debate about the ethical and moral quandaries surrounding EA, not even a rough consensus has emerged for how EA might formally be defined if it did exist.
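To ground that definition, here is a minimal sketch of “possession of the key is equivalent to the capability to read messages,” using the Fernet recipe from the Python cryptography package; the message and variable names are illustrative:

```python
# Minimal sketch of the E2E guarantee: keyholders read the message;
# everyone else learns nothing useful from the ciphertext.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()              # held only by the endpoints
token = Fernet(key).encrypt(b"meet at noon")

print(Fernet(key).decrypt(token))        # keyholder: b'meet at noon'

eavesdropper = Fernet(Fernet.generate_key())   # any other key fails
try:
    eavesdropper.decrypt(token)
except InvalidToken:
    print("no key, no plaintext")        # everyone else is equally clueless
```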
At first blush, this may seem like a pedantic concern: why waste precious time and energy on defining EA rather than simply building it? However, from decades of experience, cryptographers have found that definitions provide the only way to understand the true intention behind a construction and thus to test whether a proposed EA system actually meets its goals. They also often outlive any particular construction, and they enable principled policy and moral discussions to occur about the values society wants without getting bogged down by the scientific details of any particular construction.
In this section, I provide a taxonomy of security goals that might be required of an EA system, and I reference existing EA proposals that provide partial progress toward these goals. Several of these security properties (at least somewhat) ameliorate the policy concerns with EA laid out above; however, the international ramifications of exceptional access are irremediable and must be taken into account when deciding whether to adopt an EA system.
* Government-only authorization: EA must distinguish between government actors and non-governmental actors, only providing access to the former. Additionally, the system may distinguish between different government actors and provide them with different levels of access. Existing EA proposals tend to provide access only to one government, using techniques from public key cryptography to distinguish between actors from the intended government versus anyone else; recently, Charles Wright and I proposed a system that uses economics to distinguish nation-states from non-nation-state actors, but does not attempt to distinguish between governments (though it can be composed with other EA proposals that do).
* Targeting: Like warrants, the government’s EA requests should be scoped to specific people and specific data contents. All existing EA proposals are strongly scoped so that governments must know precisely which device, file or communication to target; some even restrict EA to devices within the government’s possession. To handle warrants that are scoped instead based on features within the data contents, EA might be combined with techniques that allow for searching on encrypted data.
* Accountability: To deter opportunities for abuse, people should be able to validate that the government only exercises its EA authority pursuant to lawful processes (e.g., a warrant). This validation may come after a time delay, and it need not require people to see the full warrant or decrypted data. Existing EA proposals show how this validation can be privately accessible by the targets of EA, available to an auditor for oversight purposes or publicly transparent to all.
* Federation: To mitigate insider threats, EA should require approval from multiple actors, ideally in different institutions, to view encrypted data. In particular, the system must not rely upon a single master key. Existing EA proposals use techniques like secret sharing to split any cryptographic material used for access between different government agencies, between the government and a technology intermediary or even between different governments (thereby requiring international cooperation to recover message contents); a minimal sketch of the splitting idea appears at the end of this section. I would caution, however, that in other contexts tech providers have had difficulty authenticating whether requests originate from valid law enforcement officials and have been duly authorized by court officials.
* Limits on access: To mitigate the chilling effect of surveillance and opportunities for abuse, EA should limit governments to recovering a limited amount of data from a small number of people; furthermore, any effort expended to access content based on one request should provide no benefit toward the next request. Existing EA proposals can limit access using the existing legal system for obtaining warrants as well as a technological limit enforced via a marginal economic cost for each government request, using a proof-of-work approach later popularized by Bitcoin (see the second sketch at the end of this section).
* Crypto transparency: Following Kerckhoffs’s principle, the design and implementation of an EA system should be open for anyone to confirm that it meets the properties listed in this section. Almost all EA proposals are openly (though incompletely) documented; one prominent exception is the U.S. government’s Clipper chip proposal from the 1990s, which relied on a secret design in order to provide compliance.
* Defense in depth: Rather than relying on any single point of trust for a power as strong as EA, the security properties of such a system should be enforced through a variety of mechanisms involving computers (e.g., high-assurance software and trusted hardware) and people (e.g., the legal framework and feedback from society) so that a compromise in one mechanism need not destroy all the above security properties.
To emphasize the final point, the biggest strength of the EA proposals set forward to date is their variety. While no existing proposal is individually strong enough to use in practice today (and indeed several have known flaws or limitations), collectively they offer a set of options that might be composed to provide a viable EA system with defense-in-depth.
There is currently a lack of consensus as to which security properties are desirable socially; people may reasonably argue that my list provides either too few or too many constraints on government power. I welcome such discussion. Indeed, as with all normative debates, discussion on exceptional access is more fruitful and enduring when debate focuses on the capabilities and limits that EA should have, rather than evaluating the strength of any single proposal.
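To make the federation property concrete, here is a minimal sketch (in Python, with illustrative names) of 2-of-2 secret splitting. Real proposals would use threshold schemes such as Shamir’s secret sharing; the XOR version below shows only the core idea, namely that neither share alone reveals anything about the key:

```python
# Minimal 2-of-2 secret splitting: each share alone is a uniformly
# random string; only XORing both shares recovers the key.
import secrets

def split(key: bytes) -> tuple[bytes, bytes]:
    share_a = secrets.token_bytes(len(key))             # random one-time pad
    share_b = bytes(k ^ a for k, a in zip(key, share_a))
    return share_a, share_b                             # e.g., two agencies

def combine(share_a: bytes, share_b: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share_a, share_b))

key = secrets.token_bytes(32)
a, b = split(key)
assert combine(a, b) == key   # cooperation of both holders is required
```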
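And here is a minimal sketch of the proof-of-work rate limit mentioned under “limits on access”: each request must carry a solution to its own hash puzzle, so every access imposes a marginal cost, and work done for one request gives no head start on the next. The request encoding and the difficulty parameter are assumptions for illustration:

```python
# Minimal hash-puzzle rate limit: a request is valid only with a nonce
# whose SHA-256 digest starts with DIFFICULTY zero bits.
import hashlib

DIFFICULTY = 20  # tune upward to raise the per-request cost

def _ok(request: bytes, nonce: int) -> bool:
    digest = hashlib.sha256(request + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

def solve(request: bytes) -> int:
    nonce = 0
    while not _ok(request, nonce):   # brute force: the "economic cost"
        nonce += 1
    return nonce

request = b"warrant-1234/device-abcd"   # hypothetical request encoding
nonce = solve(request)
assert _ok(request, nonce)               # cheap for the verifier to check
```

Because the puzzle input binds the specific warrant and device, solutions cannot be stockpiled or amortized across requests.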

3. What capabilities should exceptional access provide to law enforcement?

Political debate will not make the user versus law-enforcement conflict vanish. Even though some would prefer to not have any form of [exceptional access], the pragmatic view is that reaching some sort of compromise is necessary. … Thus the technical question is to find a solution as palatable as possible to both sides. It should guarantee enough privacy for the individual that people would go along with it, and yet be acceptable to law enforcement too.
– Professors Mihir Bellare and Shafi Goldwasser, CCS 1997
Perhaps the most challenging issue with constructing EA is not providing strong security but rather determining precisely and concretely the contexts under which access should be provided to the government. Here again, EA encryption poses unique challenges: With E2E, the appropriate functionality of encryption is simply that “people with the key should be able to read the data,” whereas with EA, one must also specify the scenarios in which access is granted to the government.
Due to their professional inclination toward secrecy combined with the organizational challenge of accumulating statistics across field offices, the U.S. federal, state and local governments have thus far been unwilling or unable to categorize the cases in which encryption tends to stymie their investigations. Simply stating the number of investigations affected by encryption (whether that number is 1,000 or 8,000) does not inform the discussion of what types of capabilities are being desired. Neither do individual stories of law enforcement being unable to access the devices of deceased victims, a problem that could be addressed instead via key management systems that enable people to delegate data to next of kin.
I list below an attempt at a (likely incomplete) set of considerations that would inform the design of access requirements for EA. Within each dimension, I list possible options from least to most difficult to achieve. I also note the portion of this space that has been explored by existing proposals for exceptional access; unfortunately, this space is largely underexplored to date, likely due to the lack of dialogue between government and the research community.
* Data location: Does the government seek to unlock data at rest on personal devices or the cloud? Alternatively, does it also seek to intercept data sent in transit on an encrypted messaging system or when browsing the web? Most EA proposals to date focus on encryption of data at rest, although some of the techniques might extend to encrypted messaging services.
* Physical vs. remote access: Is it sufficient only to recover data stored on a device in government possession, or should exceptional access also be possible remotely? (Note that this question is independent of the above “data at rest vs. in transit” question.) Some EA proposals necessitate physical access by requiring that certain operations be computed directly on the target device; however, this approach has security concerns of its own, because (from the government’s point of view) sensitive operations are performed on an untrusted device.
* Scope: Is the government’s EA interest limited to encryption provided by default or does it extend to all providers of encryption services, including third-party software? Most existing EA proposals focus on the former.
* Past vs. future: Is it sufficient to access data produced only after the government registers an interest in a target, as in Ian Levy and Crispin Robinson’s potential solution, or must the system retroactively offer decryption capabilities to data encrypted in the past? Most EA proposals provide retroactive decryption of data encrypted in the past, in large part to distinguish EA from alternatives like lawful hacking whose effectiveness is predominantly in observing actions of targets in the future.
* Accessible agents: There are thousands of law enforcement organizations and judges in the United States. Does the government plan to make requests from a central location (which might also perform an oversight role), or must an EA system and/or tech intermediaries directly handle requests from every police officer? While some existing proposals explicitly require the tech provider or a government oversight agency to intermediate requests, the majority of proposals do not address this question.
* Compliance: Is it important for the government to determine whether encrypted content properly follows the EA scheme, even before any attempt is made to decrypt the data? If so, then compliance can be determined with cryptographic proofs in software that verify compliance with EA or with the aid of trusted hardware devices. (The Clipper chip sought to do this, though researchers later showed how to bypass compliance.)
* Delay: How much delay is acceptable between when a request is made and when the data is provided? Can the request take multiple days to fulfill, or must the system operate in near-real-time? Most EA proposals assume the need to operate in a “ticking time bomb” scenario in which near-real-time responses are necessary, though a few proposals purposely leverage delays to establish physical control or limit the rate of access.
I caution proponents of EA to view the above text as a set of tradeoffs, not a wishlist. The level of access provided is proportional to the time required to construct, vet and maintain an EA system that provides strong security guarantees such as the ones listed in the previous section. For sufficiently strong access requirements it might be possible to prove the impossibility of simultaneously meeting all access and security goals. On the other extreme, if the government only desires access to files of previously-consenting victims or files stored at rest on a smartphone at the (slow) rate of current software updates, then it may be possible to build a compliant system very soon.
There need not be an immediate “silver bullet” to provide everything the government desires. Instead, researchers can and should design EA systems iteratively, beginning with easier settings. In the final section of this article, I provide policy proposals to spur such EA technological development.

4. What is the path to a future with viable EA systems?

It makes more sense to address any security risks by developing intercept solutions during the design phase, rather than resorting to a patchwork solution when law enforcement comes knocking after the fact.
– Former FBI director James Comey, October 2014 speech at the Brookings Institution
This is the easiest of the four questions to answer, so I provide my response upfront: Congress should prohibit the federal government from requiring that the private sector deploy exceptional access mechanisms. Paradoxically, supporters of EA should endorse such a ban (at least in the short term) as the single best method to resolve the Going Dark problem. Let me explain.
It is currently premature to request either voluntary or mandatory introduction of EA encryption into computers and networks. We simply do not yet have the cryptographic building blocks, the systems security architecture, or the regulatory policy framework necessary to build EA today that meets all or even most of the definitional requirements above. Ergo, time is needed to fuse crypto design, systems development and policy guidelines into a concrete EA regulatory proposal.
But the current state of affairs is even worse than that. Throughout the “crypto wars,” government actors have intermittently made credible threats to seek legislation to mandate EA, even though the crypto and systems communities have declared that it is premature. These threats stymie research into EA itself, because scientists and engineers fear that initial, half-baked approaches might be precipitously thrust upon society. Ergo, not only is there an absence of sound EA systems today, but the existing research is not currently on a trajectory toward producing them.
The best analogies I can find to describe this issue come from FBI Director Wray. At the Aspen Security Forum earlier this year, he said: “We put a man on the moon [and] we have autonomous vehicles. The number of things that are created every day in this country really defies imagination sometimes, and so the idea that we can’t solve this problem as a society, I just don’t buy it.”
In the scientific explorations into the moon expedition and autonomous vehicles, the government did not simply rely on private industry to find and deploy solutions. Instead, government engaged with the scientific research community by cultivating an environment of mutual trust and understanding; clearly defining and articulating the desired objective; providing time and funding to research the technological and policy aspects of the problem; and committing to hold off on deployment until reaching consensus from scientists and engineers that the technology was safe.
All these features are absent in the “crypto wars,” and all of them are within the government’s power to address. The policy debate can be advanced by first acknowledging the good faith and common interests of everyone involved. The FBI, along with local and state law enforcement organizations, can provide concrete knowledge about access mechanisms that could address many of the cases currently stymied due to encryption. Congress can also appropriate funds to jump-start scientific research for exceptional access and design a rigorous policy proposal for encryption regulation. Finally and most importantly, legislatures should impose a moratorium on EA so as to restore trust and foster an environment in which scientists can evolve EA prototypes and run small-scale pilot tests without the concern that a half-baked EA system will suddenly be introduced. That moratorium can subsequently be lifted if and only if EA constructions mature to the point of achieving agreed-upon definitions of security and functionality, and societies decide to use those constructions, cognizant of the benefits and risks of this decision.

Lawfare does some of the best writing about security that’s not the breach du jour or whatever we want to panic about today.

Seriously consider them for your end-of-year (and following years) donation: https://www.lawfareblog.com/support-lawfare

I’m sad that I have to know this is a thing that exists: WSJ website defaced by a fan in ongoing YouTube subscribers battle | ZDNet

Congratulations, you’re getting promoted! You have excelled at the Thing You Do to such a degree that you’ll now be leading a whole team of people who Do That Thing. Very responsibility, much excite.

Okay wait, you may say. That’s cool, but I like Doing the Thing. I’m pretty good at it, and if I’m leading a team, will I still get to do it? Will I still get to perform the work that got me to where I am today?

The short answer is: Yes, you can! If it’s important to you to keep doing some “individual contributor” work as a manager, you can make that happen.

The long answer is: Well, you can. Like, if Mark Zuckerberg wants to go in and make some code changes to Facebook, he has the authority necessary to do that. And reportedly, in frustration with a pet bug or issue, Zuck has been known to bang out a fix and submit a merge request – which then hits a series of roadblocks around coding guidelines, localization, automated testing, and oh god why is this stuff so complicated these days ughhhhh.

And that’s good. It’s helpful for leaders to get their hands dirty from time to time, to get caught up on what their teams are doing, how they’re doing it, and get more context for the detail work involved.

But let’s be honest. Is Mark Zuckerberg’s time best spent mastering Facebook’s latest pull request rules around internationalization flow, or would that same time be better spent, I don’t know, figuring out how Facebook can ruin the world less?

As a manager, you too need to consider these tradeoffs. Yes, you have the ability to dig in and do the work yourself, but you now have a specialer ability: you can multiply your efforts across a whole group. As a leader, you’re in a position to solve bigger problems than you ever could by yourself, since you can deploy the full force of a team.

Leadership Mode Activate – Allen Pike

I could do without the Zuck reference, but the message is sound. This was something I struggled with when I became the manager of not just a team but a large team of people with arguably better skills than mine. Securing budget, running interference, talking with customers, and playing politics (albeit poorly) were more valuable than me changing firewall rules or adding a static route or running down anomalous traffic patterns.

Oh, I could also do without the giant robot analogy.

Still a good article.

An Outcome-Based Analysis of U.S. Cyber Strategy of Persistence & Defense Forward:

The new U.S. Cyber Command (USCYBERCOM) vision and the Department of Defense Cyber Strategy embody a fundamental reorientation in strategic thinking.

With the publication of these documents, as well as the 2017 National Security Strategy and the 2018 National Defense Strategy, there is a general conception among experts that the U.S. has, for the first time, articulated a strategy that truly appreciates the unique “symptoms” of cyberspace. The documents recognize that there is a new structural set of dynamics associated with the new domain of cyberspace that has incentivized a new approach to power competition—in particular, that hostile or adversarial behavior below the threshold of armed attack could nevertheless be strategically meaningful (that is, change the balance of power).

Yet most cyber experts have also argued that the ‘medicine’ prescribed by the Defense Department and USCYBERCOM should be further scrutinized. Indeed, the side effects of the strategy of “persistent engagement” and “defense forward” are still ill-understood. As we have argued elsewhere, a United States that is more powerful in cyberspace does not necessarily mean one that is more stable or secure. More research is required to better understand adversarial adaptive capacity and escalation dynamics.

(Via Lawfare – Hard National Security Choices)

CCleaner 5.50 with new options to control program updates:

A new version of the file cleaning software CCleaner for Windows, version 5.50, features new options to control program updates.

The year 2018 has not been a very pleasant one for Piriform, maker of CCleaner, and Avast, Piriform’s parent company. The integration of Telemetry collection, at first turned on by default and without clear options to disable it, and forced automatic updates to a new version of CCleaner were two of the major blunders of that year …

Closing Words

One cannot say that Piriform is not trying. The company introduced privacy options in the program after users complained about the new data collection and the lack of options to turn Telemetry off. Now, after users complained in September that CCleaner would auto-update itself, the company has introduced options to control these updates in the program.

The ride would have been a lot smoother for Piriform if the company had introduced these options before it made the changes or pushed the automatic update to CCleaner installations.

Now You: Do you still use CCleaner?

(Via gHacks Technology News)

My answer is no, I do not trust CCleaner or Piriform or Avast. You should make your own decision, but my counsel is against trusting them.

Why Passion is Overrated (instead, here’s what you should do):

I often hear people say if they only had a real passion, they would be able to follow it, break free from their mundane job and create their dream life.

But what to do when you don’t have a passion? Are you just supposed to wait until it one day magically drops from the sky to rescue you?

I feel there’s this mistaken belief that some people ‘have a passion’ for something, which enables them to live a fabulous, meaningful life, whereas others don’t and thus are stuck in the hamster wheel.

… What do I love doing, I asked myself? I felt completely blank and confused. It didn’t help that well-meaning family members and friends just told me to follow a different passion. What if I didn’t have one?

This is where a lot of people get stuck.

I was certainly stuck until I realised that doing something is better than doing nothing. You learn a lot from doing something. Anything is better than nothing.

(Via Pick the Brain | Motivation and Self Improvement)

Japan impresses me. There are restaurants and shops and ryokan that have been operating for years or decades or longer where they focus on what they do, they take pride in what they do, and they refine how they do what they do in evolutionary rather than revolutionary ways. Watch Jiro Dreams of Sushi for an extreme version of this mindset. This is changing, of course.

Back on the late, lamented PVC Security podcast we talked often about “finding your passion”. It sounds nice, but I think we did our listeners a bit of a disservice. We did talk about how to find your passion a little, but we failed to properly acknowledge other paths and the realities of life.