Crappy IoT on the high seas: Holes punched in hull of maritime security:

Years-old security issues mostly stamped out in enterprise technology remain in maritime environments, leaving ships vulnerable to hacking, tracking and worse.

At the Infosecurity Europe conference in London, Ken Munro and Iian Lewis of Pen Test Partners (PTP) demonstrated multiple ways to disrupt the shipping industry. Weak default passwords, failure to apply software updates, and a lack of encryption enable a variety of attacks.

(Via The Register – Security)

Vulnerable ship systems: Many left exposed to hacking:

“Ship security is in its infancy – most of these types of issues were fixed years ago in mainstream IT systems,” Pen Test Partners’ Ken Munro says, and points out that the advent of always-on satellite connections has exposed shipping to hacking attacks.

(Via Help Net Security)

Maritime navigation hack has potential to wreak havoc in English Channel:

As reported by the BBC, security researcher Ken Munro from Pen Test Partners has discovered that a ship navigation system called the Electronic Chart Display and Information System (Ecdis) can be compromised, potentially to disastrous effect.

Ecdis is a system commonly used in the shipping industry by crews to pinpoint their locations through GPS, to set directions, and as a replacement for pen-and-paper charts.

The system is also touted as a means to reduce the workload on navigators by automatically dealing with route planning, monitoring, and location updates.

However, Munro suggests that a vulnerability in the Ecdis navigation system could cause utter chaos in the English Channel should threat actors choose to exploit it.

The vulnerability, when exploited, allows attackers to reconfigure the software to shift the recorded location of a ship’s GPS receiver by up to 300 meters.
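To put that 300-meter shift in perspective, here is a rough back-of-the-envelope sketch (my own illustration, not from the PTP research) converting the offset into degrees of latitude and longitude using a spherical-Earth approximation; the 50° N latitude is an assumption, chosen as roughly that of the English Channel.

```python
import math

# Spherical-Earth approximation; figures are illustrative only.
EARTH_RADIUS_M = 6_371_000

def offset_degrees(shift_m: float, lat_deg: float) -> tuple[float, float]:
    """Convert a ground-distance shift into degrees of latitude/longitude
    at the given latitude."""
    dlat = math.degrees(shift_m / EARTH_RADIUS_M)
    dlon = math.degrees(shift_m / (EARTH_RADIUS_M * math.cos(math.radians(lat_deg))))
    return dlat, dlon

# At ~50 degrees N, a 300 m shift is only a few thousandths of a degree --
# small enough to escape casual scrutiny on a chart display, large enough
# to matter in a constrained shipping lane.
dlat, dlon = offset_degrees(300, 50.0)
print(f"dlat ~ {dlat:.5f} deg, dlon ~ {dlon:.5f} deg")
```

The point of the exercise: the manipulation is subtle on screen but significant in narrow waters.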

(Via Latest Topic for ZDNet in security)

I’ve been talking with companies in this space about these types of issues. While Munro’s research is telling, this is not shocking.

It does very nicely illustrate the real value of good penetration testing: challenging assumptions, taking nothing for granted, and divorcing motive from threat.

For example, the 300-meter location discrepancy could have nothing to do with the shipping company or the ship itself. It could be used by a crypto mining concern looking to delay the arrival of new GPUs for a rival firm. This type of attack could be part of a larger series of attacks, subtle enough that further investigation would be unlikely (as opposed to the English Channel scenario in the ZDNet article), and could reap substantial benefits for the crypto mining concern.

I believe it to be a war of pretexts, a war in which the true motive is not distinctly avowed, but in which pretenses, after-thoughts, evasions and other methods are employed to put a case before the community which is not the true case.

DANIEL WEBSTER: Speech in Springfield, Mass., Sept. 29, 1847

An Example of Deterrence in Cyberspace:

In 2016, the US was successfully deterred from attacking Russia in cyberspace because of fears of Russian capabilities against the US.

I have two citations for this. The first is from the book Russian Roulette: The Inside Story of Putin’s War on America and the Election of Donald Trump, by Michael Isikoff and David Corn. Here’s the quote:

The principals did discuss cyber responses. The prospect of hitting back with cyber caused trepidation within the deputies and principals meetings. The United States was telling Russia this sort of meddling was unacceptable. If Washington engaged in the same type of covert combat, some of the principals believed, Washington’s demand would mean nothing, and there could be an escalation in cyber warfare. There were concerns that the United States would have more to lose in all-out cyberwar.

“If we got into a tit-for-tat on cyber with the Russians, it would not be to our advantage,” a participant later remarked. “They could do more to damage us in a cyber war or have a greater impact.” In one of the meetings, Clapper said he was worried that Russia might respond with cyberattacks against America’s critical infrastructure — and possibly shut down the electrical grid.

The second is from the book The World as It Is, by President Obama’s deputy national security advisor Ben Rhodes. Here’s the New York Times writing about the book.

Mr. Rhodes writes he did not learn about the F.B.I. investigation until after leaving office, and then from the news media. Mr. Obama did not impose sanctions on Russia in retaliation for the meddling before the election because he believed it might prompt Moscow into hacking into Election Day vote tabulations. Mr. Obama did impose sanctions after the election but Mr. Rhodes’s suggestion that the targets include President Vladimir V. Putin was rebuffed on the theory that such a move would go too far.

When people try to claim that there’s no such thing as deterrence in cyberspace, this serves as a counterexample.


(Via Schneier on Security)

Well said and cited.

Is Your SOC Flying Blind? (Sun, Jun 3rd):

After you have finished impressing your VIPs, what actionable information should be displayed in your SOC to help them respond to threats in your environment?

Consider spending time this week ensuring your SOC wall is populated with meaningful screens that add value to your SOC by asking these questions.

  • Which security controls are not sending data to your SOC?
  • Would your SOC know when your most critical systems stopped sending their logs?
  • What is the baseline of traffic volume in and out of your sensitive network zones?
  • What is the health status of your security agents?
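The second question above, detecting when critical systems go silent, lends itself to simple automation. Here is a minimal sketch of a log-source heartbeat check; the source names, the 15-minute threshold, and the data shape are all hypothetical, not from the SANS diary.

```python
from datetime import datetime, timedelta

# Hypothetical staleness threshold for this sketch.
STALE_AFTER = timedelta(minutes=15)

def stale_sources(last_seen: dict[str, datetime], now: datetime) -> list[str]:
    """Return log sources whose most recent event is older than STALE_AFTER."""
    return sorted(src for src, ts in last_seen.items() if now - ts > STALE_AFTER)

# Example inventory: last event time per source (hypothetical).
now = datetime(2018, 6, 3, 12, 0)
last_seen = {
    "edge-firewall": now - timedelta(minutes=2),
    "domain-controller": now - timedelta(hours=3),  # silent too long
    "proxy": now - timedelta(minutes=40),           # silent too long
}
print(stale_sources(last_seen, now))  # → ['domain-controller', 'proxy']
```

A wall screen driven by a check like this answers the "flying blind" question at a glance: an empty list is good news.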

Share what you find valuable on your SOC wall!

(Via SANS Internet Storm Center, InfoCON: green)

Typical mistakes of SOCs – forgetting the audience, not accommodating multiple audiences, and not providing content tailored to each audience. Metrics, analytics, dashboards, and reporting are every bit as important as any other SOC function.

The Bleak State of Federal Government Cybersecurity | WIRED

It’s a truism by now that the federal government struggles with cybersecurity, but a recent report by the White House’s Office of Management and Budget reinforces the dire need for change across dozens of agencies. Of the 96 federal agencies it assessed, it deemed 74 percent either “At Risk” or “High Risk,” meaning that they need crucial and immediate improvements.

While the OMB findings shouldn’t come as a complete shock, given previous bleak assessments—not to mention devastating government data breaches—the stats are jarring nonetheless. Not only are so many agencies vulnerable, but over half lack even the ability to determine what software runs on their systems. And only one in four agencies could confirm that they have the capability to detect and investigate signs of a data breach, meaning that the vast majority are essentially flying blind. “Federal agencies do not have the visibility into their networks to effectively detect data exfiltration attempts and respond to cybersecurity incidents,” the report states bluntly.

Perhaps most troubling of all: In 38 percent of government cybersecurity incidents, the relevant agency never identifies the “attack vector,” meaning it never learns how a hacker perpetrated an attack. “That’s definitely problematic,” says Chris Wysopal, CTO of the software auditing firm Veracode. “The whole key of incident response is understanding what happened. If you can’t plug the hole the attacker is just going to come back in again.”

(Via Wired)

This isn’t just my tax $ failing to protect me; it’s all Americans, residents, and taxpayers whose tax money fails to protect them as well.

Makes one think more critically about the Executive Branch deciding that there is no need for key cybersecurity jobs, including the Coordinator position. It also makes one wonder if states like California, New York, and Texas can together force better Federal cybersecurity through legal action.

Cyber security: We need a better plan to deter hacker attacks says US:

The US needs to fundamentally rethink its strategies for stopping cyber attacks and should develop a tailored approach to deterring each of its key adversaries, according to a new government report.

The report published by the US State Department — like a recent paper on botnets — comes in response to an executive order signed by President Donald Trump last year, which called for a report “on the nation’s strategic options for deterring adversaries and better protecting the American people from cyber threats.”

The report said that while the US has become dependent upon sophisticated networked information systems, its rivals have been learning to exploit that dependence to “steal from Americans, disrupt their lives, and create insecurity domestically and instability internationally.”

The cyber threat posed by rival states — and by Russia, China, Iran and North Korea in particular — is often alluded to by intelligence agencies, but the US and its allies have struggled to find a way to deter these cyber intrusions.

The unclassified cyber-deterrence overview published by the State Department doesn’t mention particular countries, but said that strategies for deterring malicious cyber activities “require a fundamental rethinking”. The report said that the US has made efforts to promote a framework for “responsible state behaviour in cyberspace”, but noted that this has not stopped state-sponsored cyber incidents.

“The United States and its likeminded partners must be able to deter destabilizing state conduct in cyberspace,” the State Department warned.

Of course, the US has plenty of military muscle should it come to full-on cyberwarfare, but it’s much harder to tackle cyber attacks that don’t necessarily deserve an armed response — which make up the majority of attacks.

The report said the US should develop a broader menu of consequences that it can impose following a significant cyber incident. The US should also take steps to make it easier to prove who is behind cyber attacks, it said.

Another big problem is the poor state of cyber security. “Efforts to deter state and non-state actors alike are also hindered by the fact that, despite significant public and private investments in cybersecurity, finding and exploiting cyber vulnerabilities remains relatively easy,” the report said.

“Credibly demonstrating that the United States is capable of imposing significant costs on those who carry out such activities is indispensable to maintaining and strengthening deterrence,” the report added.

According to the State Department, the three key elements of cyber deterrence should include:

  • Creating a policy for when the United States will impose consequences: The policy should provide criteria for the types of malicious cyber activities that the US government will seek to deter. The outlines of this policy must be communicated publicly and privately in order for it to have a deterrent effect.
  • Developing a range of consequences: There should be “swift, costly, and transparent consequences” that the US can impose in response to attacks below the threshold of the use of force.
  • Building partnerships: Other states should work in partnership with the US through intelligence sharing or supporting claims of attribution.

(Via Latest Topic for ZDNet in security)

Curious what your take is on this, Dear Friends.

I’m not sure how the State Department, the U.S. government’s diplomats, think that this kind of response is workable diplomatically. Maybe it is in the report, which I have yet to read. But who needs context to respond?

All Women on Deck at RESET Cyber Conference

With more than 15 female experts in cybersecurity scheduled to speak on the evolving cyber threat landscape, RESET, hosted by BAE Systems, claims to be challenging the status quo with its all-female speaker lineup.

Scheduled for 14 June at the Kennedy Lecture Theatre, University College London (UCL), the conference is open to all security professionals and will “provide in-depth knowledge of destructive cyber-attacks and criminal operations, threat hunting and strategy, and human centric security. In panel discussions, we consider public and private roles in defending cyber space and the risks of securing the un-securable as new technologies emerge.”

What is unique about this event is the speaker lineup. BAE Systems threat intelligence analysts Kirsten Ward and Saher Naumaan have launched the event not only to bring professionals together to engage in a discussion about the evolving threat landscape, but also in part to showcase the impressive women who are often not invited to speak at industry conferences.

Click through to get all the details.

NSA General Counsel Glenn Gerstell Remarks to Georgetown Cybersecurity Law Institute:

On Wednesday, NSA General Counsel Glenn Gerstell delivered the following remarks at the Georgetown Cybersecurity Law Institute in a speech entitled “Failing to Keep Pace: The Cyber Threat and Its Implications for Our Privacy Laws.”

Imagine walking through the front doors of your office on a Thursday morning and immediately receiving a note instructing you not to turn on your work computer for an indefinite period of time. On March 22, this very scenario played out in Atlanta’s City Hall, as employees were handed printed instructions that stated, in bold, “Until further notice, please do not log on to your computer.” At 5:40 that morning, city officials had been made aware that a particular strain of SamSam ransomware had brought municipal services in Atlanta to a halt. This type of ransomware is known for locking up its victims’ files with encryption, temporarily changing those file names to “I’m sorry,” and giving victims a week to pay a ransom.

Residents couldn’t pay for things like water or parking fines. The municipal courts couldn’t validate warrants. Police resorted to writing reports by hand. The city stopped taking employment applications. One city council member lost 16 years of data.

Officials announced that the ransom demand amounted to about $51,000, but have not indicated whether the city paid the ransom. Reports suggest, however, that the city has already spent over $2 million on cybersecurity firms who are helping to restore municipal systems. Atlanta also called in local law enforcement, the FBI, DHS, the Secret Service, and independent forensic experts to help assess what occurred and to protect the city’s networks in the future.

Taking a somewhat relaxed approach to cybersecurity, as the situation in Atlanta seems to have demonstrated, is clearly risky, but unfortunately, it is not uncommon. As our reliance on digital technology has increased, both private companies and public sector entities have experienced crippling cyberattacks that brought down essential services. Atlanta is but one example of the pervasiveness of connected technologies and the widespread impact on our lives when those technologies no longer function correctly.

We’ve reached an inflection point: we now depend upon connected technology to accomplish most of our daily tasks, in both our personal and business lives. At least one forecast predicted that over 20 billion connected devices will be in use by 2020. I hardly need tell the audience at a cybersecurity conference about the nature and scope of our cyber vulnerabilities. What’s surprising is not the extent of this vulnerability, but that it has manifested itself in ways that haven’t yet had dramatic, society-wide effects, although the Atlanta example is surely a good scare. I suspect we all fear that far more crippling and dangerous cyber incidents are likely in our future, since malicious activity is relatively easy and the increasing pace of connected technology simply increases the target size.

So the time has come – indeed, if it has not already passed – to think seriously about some fundamental questions with respect to our reliance on cyber technologies: How much connected technology do we really want in our daily lives? Do we want the adoption of new connected technologies to be driven purely by innovation and market forces, or should we impose some regulatory constraints? These topics are too big for me to confront here today, and some aspects of these questions will be dealt with at this Conference, so I ask you to keep them in mind. For today, I will concentrate on one area where the legal issues raised by these questions coalesce in a significant way – namely, the privacy implications of our increasingly digital lives. You may be wondering why I, as the NSA General Counsel, chose to discuss the privacy aspects of cybersecurity with you here today. As you probably know, at NSA we have two equally important missions: foreign electronic surveillance (or “signals intelligence” to use the legal term) to provide our country’s leaders and the military with the foreign intelligence they need to keep us safe, and a cybersecurity mission, mostly focused on national security systems. I feel NSA can make a contribution to the privacy discussion because we at NSA – as a direct result of our twin missions – are exceptionally knowledgeable about and sensitive to the need to comply with constitutional, statutory and regulatory protections for the privacy and civil liberties of Americans.

Although we continue to forge ahead in the development of new connected technologies, it is clear that the legal framework underpinning those technologies has not kept pace. Despite our reliance on the internet and connected technologies, we simply haven’t confronted, as a US society, what it means to have privacy in a digital age.

If you look at other technologies that were considered both novel and significant, regulations may have lagged behind, but we didn’t let the technology get too far out in advance before laws and societal norms caught up. Take, for example, automobiles. By the time they became pervasive and affordable enough for most families to own, we had already started putting in place laws governing how they should be operated, how they should be safely constructed and maintained, how they should be inspected, and how all of these new rules should be enforced on the federal, state, and local levels. And our society figured out the resultant impact on infrastructure. Some of those decisions were intuitively obvious and others weren’t.

You could make the same analogies with electricity or radio – in each case our society sorted out, in a reasonably timely fashion over a few decades, fundamental questions about public versus private ownership and the extent of substantive regulation, whether for economic or safety or other factors. But not so with cyber. Has there ever been a technology that has become this pervasive, this ubiquitous, and this impactful so quickly? It’s no wonder our societal norms and legal structures – and here I’m mostly referring to our concepts of privacy – have failed to keep pace.

We all recognize that an unprecedented amount of our personal information is now available online. Although this certainly improves efficiency, facilitates transactions, and enables accessibility of stored information, it also means that we’ve made our information vulnerable. It could be exposed through a breach or hack, manipulated by a bad actor, sold to a third party, or, as in the Atlanta example, simply made unavailable.

Ironically, increased cybersecurity, which necessarily includes network monitoring and intrusion detection, also has privacy implications. In order to effectively detect whether an attack is occurring or has occurred, information security systems and professionals need to see the activity happening on their networks. This can include, for example, monitoring what’s being sent or received by individuals using the networks.

Privacy in the US is a notion that has traditionally been tied up in the Fourth Amendment, which states the following:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

The Fourth Amendment grew out of the experiences of early Americans, who disagreed with the British Crown’s use of general warrants to undermine political adversaries by searching for writings or publications that decried the British leadership and its policies. Later, in colonial America, general warrants were again used to gain entry to colonists’ homes to search for goods on which taxes had not been paid. As a result, the Fourth Amendment’s concept of privacy revolves largely around the notion of being free from government physical intrusion – which is understandable, given the Amendment’s text and history.

As you know, however, the word “privacy” itself appears nowhere in the text of the Fourth Amendment. Indeed, if you had reviewed the first hundred years’ worth of the Supreme Court’s many occasions to examine the Fourth Amendment, you would have found cases focusing mostly on whether a governmental intrusion constituted a physical “search” or “seizure,” and on whether evidence obtained in violation of the Fourth Amendment could be excluded from a trial. But not a word about a right to privacy as such. Indeed, in a fascinating case clearly reflecting that Fourth Amendment jurisprudence grew out of this very physical sense of searches and seizures – think of federal agents breaking down doors into your bedroom – Chief Justice William Howard Taft said in the 1928 Olmstead case that wiretapping a telephone conversation didn’t amount to a search or seizure, since the evidence in that case was obtained simply by “hearing.” So Mr. Olmstead, a manager of a bootleg operation during the Prohibition Era, lost his battle to suppress evidence of some incriminating phone calls and went to jail.

You would have had to wait until 1967, in Katz v. United States, to find the Supreme Court explicitly saying that the Fourth Amendment embraced a right to privacy and that the surveillance of a phone call was a “search” within that amendment. The FBI suspected Katz of transmitting gambling information to clients in other states over a payphone in a public phone booth, so they attached an eavesdropping device to the outside of the booth. The Supreme Court held that the Fourth Amendment’s protection against unreasonable searches and seizures covered electronic wiretaps of public payphones. Writing for the majority, Justice Potter Stewart held that the Fourth Amendment protects people, not places, and in his concurrence, Justice Harlan fleshed out a test for identifying a reasonable expectation of privacy. This right was further defined in Smith v. Maryland, a 1979 Supreme Court case in which police asked the telephone company to record the numbers dialed from the telephone of Michael Lee Smith, a man suspected of robbing a woman and then placing threatening calls to her. The Supreme Court held in Smith that there is no reasonable expectation of privacy for information (such as the phone numbers you are dialing) that is voluntarily given to third parties (such as telephone companies). Our Fourth Amendment jurisprudence continued to develop in this manner, with courts largely focusing on the type and location of the surveillance taking place, based upon the facts of each particular case, to determine whether a protected privacy interest was implicated. I might add as an aside that almost nowhere in the case law is the real focus on the substance of the communication, except insofar as you get to consider that by reason of where the communication occurred.

The Supreme Court’s review of privacy under the Fourth Amendment culminates at present in the Carpenter case, which is currently pending before the Court. Carpenter deals with whether the warrantless search and seizure of historical cell phone records that reveal the location and movement of the cell phone user violates the Fourth Amendment. The Court’s opinion in this case could be structured in one of two ways: either it will be narrowly tailored, with the Court recognizing that the question at issue is tied to specific facts about cell phones, or the opinion will serve as a vehicle to make broader rulings about privacy in this area. Regardless of which structure is chosen and even if the Court writes a brilliant opinion – which I fully expect it to do – the Justices are still evaluating this case through the lens of a changing technology. This opinion, when issued, will likely contribute to our privacy jurisprudence for years to come – and yet, it will be based entirely upon the cell phone technology of today. Twenty years ago, in the era of landline telephones, I’m not certain we could have contemplated the ability to extrapolate a person’s location or movements from where their telephone traveled. In another twenty years, will we still be using a device similar to today’s cell phone? Probably not, and yet we may well be constrained to apply the Carpenter ruling to whatever device or technology we have in our pockets – or perhaps implanted in our brains – at that time.

This case highlights one of the major limitations in applying Fourth Amendment jurisprudence in the digital era to which I wish to draw your attention. Courts are limited to deciding only the case or controversy before them based upon the set of facts presented, and properly so. But because of the manner in which courts must evaluate cyber devices, our privacy laws in this area are generally backward looking, and intuitively, that feels like the wrong approach when addressing a rapidly developing technology. By contrast, that approach does make sense in, say, the area of tort liability. Law students will recall that Mrs. Palsgraf’s injuries on a Long Island Railroad platform were said by Justice Benjamin Cardozo to not be “proximately caused” by the negligence of railway guards. Think about how that legal concept has been applied in the decades since then. Clearly, in the case of tort law, specific cases can yield general principles that are of considerable utility in application to future, but very different, facts.

But I submit that in the case of rapidly developing technology, a case-specific approach, especially one where the legal premise is grounded in the very technology before the court, is inherently problematic. It results in a patchwork quilt of legal precedent about privacy that takes into account only the particular technology directly before the court in each case, which in turn leads to decisions that are sometimes hard to reconcile or are distinguishable only by factors that seem of dubious significance. Most importantly for both the government and the private sector, it yields a set of legal determinations in this area that are, at best, of uneven value in predictive utility. For example, the government needs to know where the lines are to be drawn and equally the private sector wants some degree of certainty as to exactly what will and will not be protected.

Even the Supreme Court has begun to recognize the limitations on its ability to set out a legal framework that suitably marries Fourth Amendment doctrine with emerging technology. In 2012, Justice Sotomayor called into question the third party doctrine’s continued practicality in her concurrence in United States v. Jones, writing that “the premise that an individual has no reasonable expectation of privacy in information voluntarily disclosed to third parties…is ill suited to the digital age.” In Riley v. California, which was decided in 2014, Chief Justice Roberts wrote that comparing a search through a wallet, purse, or address book to a search of a cell phone “is like saying a ride on horseback is materially indistinguishable from a flight to the moon. Both are ways of getting from point A to point B, but little else justifies lumping them together. Modern cell phones, as a category, implicate privacy concerns far beyond those implicated by the search of a cigarette pack, wallet, or a purse.” At some point, as the Justices have signaled, the quantity of information that an individual shares online or stores in an electronic device may, in aggregate, provide a composite picture of that person that is qualitatively deeper and more insightful than any individual piece of information.

Beyond just its inability as currently applied to keep pace with technology, the Fourth Amendment also suffers from even greater limitations when it comes to protecting your privacy: it is powerless to protect you against any private sector activity. It simply doesn’t apply to private companies. This legal framework is far different from the notion of privacy that has driven European lawmaking and court decisions. Unlike the US, the concept of privacy in Europe focuses instead on the dignity of the person and it very much extends to private sector activity. Traditionally, this has resulted in laxer regulation of government surveillance, but much stricter laws about, for example, data protection, credit reporting, and workplace privacy. Europeans value the right to be left alone and protection from public embarrassment or humiliation. These concepts are so important to Europeans that they were woven into the EU’s Charter of Fundamental Rights, which mandates respect for private and family life and protection of personal data.

As cyber technology has progressed, the European concept of privacy has resulted in relatively strict laws in Europe about the handling of electronic information. For example, the General Data Protection Regulation, or GDPR, which takes effect in just a few days, applies to all EU organizations, any organizations seeking to offer goods or services to EU subjects, and any companies holding or processing the personal data of people in the EU. Companies that fail to comply with the GDPR can incur penalties of up to 4% of annual global income or €20 million, whichever is greater. The GDPR seeks to strengthen consent rules, requiring that disclosures be clear, intelligible, and easily accessible, and that it be easy for a user to withdraw consent. It ensures the right to be forgotten, which includes erasing data and preventing its further dissemination. The law also mandates privacy by design; data protection must be designed into systems, rather than added on. If a data breach occurs, companies must provide notification within 72 hours.

Whether we applaud it or not, the European, Japanese and other nations’ movement toward comprehensive privacy regulation forces everyone in this digitally connected world to consider how we are going to reconcile different notions of privacy. We’ve been persistent in scrutinizing government intrusion into our daily lives – which is certainly a worthy focus – but have we done so at the expense of our personal dignity or the integrity of our private information, particularly given the rapid pace of technological development? Some might say that Europe’s approach is better suited to manage the privacy challenges posed by the digital age. Let me make crystal clear that the NSA is not, and I am not, advocating for diminished privacy protections or an increased ability to conduct surveillance. Rather, we at NSA feel duty bound to discuss these types of issues, and we’d like to do so transparently and openly to help reach a consensus as to the best approach.

My key point is that I believe we no longer have the luxury of addressing this issue in an ad hoc fashion through our court system, which is largely where our privacy laws have been shaped to date. With Europe pushing ever more aggressive data protection laws, the choice may soon be out of our hands. Companies operating internationally are being forced to adapt their policies and procedures to adhere to regulations implemented in foreign countries. If we want to play a role in shaping those policies to suit our own notions of privacy, we need an overarching effort to address privacy and digital technology here in the US.

The public and private sectors will need to take a holistic approach to addressing privacy concerns associated with our increasing reliance on digital technologies. Similar to the way that new drugs must be reviewed and approved for safety and efficacy before they come to market, perhaps we need laws or regulations requiring review of privacy and cybersecurity safeguards in new connected products before they can be made available to the public. Perhaps, as in Europe, we need stronger notice and consent requirements to regulate how our personal information can be used, shared, or disseminated online. Or perhaps, in some industries, we need to mandate the adoption of low-tech redundancies to safeguard against the loss or manipulation of personal information stored online. This need not necessarily entail government regulation, as industry-generated approaches might be sufficient – but my point is simply that we must have a societal dialogue about how we want to confront the problem.

We also need to consider what privacy means to us here in the US. Because we’ve emphasized freedom from government surveillance in our current privacy regime, and because the fact-specific legal analysis of that surveillance has focused, as discussed above, on the type and location of the surveillance, the same piece of electronic personal information may be protected from interception by the government, but could be disseminated, sold, or otherwise used by a private company with few, if any, limitations. In only narrow areas, such as HIPAA regulations for health records and Fair Credit Reporting Act requirements for financial records, do our laws focus on the type of information at issue. As I noted earlier about the absence of a focus on the content of communications, we could, for example, have a privacy scheme that was dependent in greater part on the substantive nature of communications rather than how communications are collected. To address these inconsistencies that have grown up around our legal privacy framework, we must evaluate carefully the manner in which private companies rely on connected technology to carry out their business activities. This includes considering not only how and when they collect personal information from customers, whether to store it online, and whether and to whom it should be disseminated, but also whether relying solely upon networked devices and systems is even the right choice for certain activities when particular sensitivities may be involved.

We also can’t forget that each one of us has a great deal of personal responsibility for our own private information. Regardless of what steps the government ultimately takes, we need to maintain awareness of and exercise some amount of discretion about how we are exposing our personal data over the internet.

As you spend the next day thinking about the cybersecurity problem and how to address it, please continue to keep in mind the possible ramifications on privacy and consider how our laws should be structured to account for those. Trends point toward the fact that privacy norms are likely to be the most pressing issue with respect to digital technology over the next decade. There are no longer excuses to be made for being caught flat-footed with respect to the security of our networks and systems, particularly when those systems store or transmit personal information. As many have noted, to the extent we do not coalesce around an accepted concept of privacy, and that failure delays or impairs an effective cybersecurity program, then Americans’ privacy is put at even greater risk.

(Via Lawfare – Hard National Security Choices)

Also on:

U.K. Outlines Position on Cyberattacks and International Law:

[…] a big process question is how the U.K. position might catalyze broader diplomatic endeavors to clarify or create rules for cyberspace. Efforts within the U.N. to reach global consensus on these issues have so far failed, mostly because states’ interests are poorly aligned. Expert processes like the one that produced the Tallinn manuals can play useful roles, but they are no substitute for state practice and the articulation and defense of legal interpretations.

(Via Lawfare – Hard National Security Choices)

UPDATE: Isa Qasim’s take is deeper and describes eight key points from the speech:

United Kingdom Att’y General’s Speech on International Law and Cyber: Key Highlights:

First, it is important for states to publicly articulate their understanding of international law, especially in cyberspace. […]

Second, cyber is not lawless. […]

Third, cyber-operations that result in an “equivalent scale” of death and destruction as an armed attack trigger a state’s right to self-defense under the UN Charter’s Article 51. […]

Fourth, the Article 2(7) prohibition on interference in “domestic affairs” (the principle of non-intervention) extends in the cyber context to “operations to manipulate the electoral system to alter the results of an election in another state, intervention in the fundamental operation of Parliament, or in the stability of our financial system.” Wright acknowledges, however, that the exact boundary of this prohibition is not clear.

Fifth, there is no cyber-specific rule prohibiting the “violation of territorial sovereignty” beyond the Article 2(7) prohibition described in the point above.  […] This appears to be a rejection of the Tallinn Manual’s position on the issue, which had articulated an independent international legal rule prohibiting certain cyber operations as a violation of sovereignty.

Sixth, states are not bound to give prior notification of countermeasures when “responding to covert cyber intrusion.” […]

Seventh, there is no legal obligation to publicly disclose the information underlying a state’s attribution of hostile cyber-activity to a particular actor or state. Similarly, there is no universal obligation to publicly attribute hostile cyber activity suffered.

Eighth, a victim state does not have free rein to determine attribution for a malicious cyber operation before taking a countermeasure. Wright stated that “the victim state must be confident in its attribution,” and he added later, “Without clearly identifying who is responsible for hostile cyber activity, it is impossible to take responsible action in response.” This view contrasts with other writings in this field (see Sean Watts’ article at Just Security).


(Via Just Security)

Also on:

I enjoyed and learned from 100 Years of Feynman, which starts from his eponymous formula and evolves into these tips for solving physics problems:

  1. Read the question! Some students give solutions to problems other than that which is posed. Make sure you read the question carefully. A good habit to get into is first to translate everything given in the question into mathematical form and define any variables you need right at the outset. Also drawing a diagram helps a lot in visualizing the situation, especially helping to elucidate any relevant symmetries.
  2. Remember to explain your reasoning when doing a mathematical solution. Sometimes it is very difficult to understand what students are trying to do from the maths alone, which makes it difficult to give partial credit if they are trying to the right thing but just make, e.g., a sign error.
  3. Finish your solution appropriately by stating the answer clearly (and, where relevant, in correct units). Do not let your solution fizzle out – make sure the marker knows you have reached the end and that you have done what was requested. In other words, finish with a flourish!

(Via In The Dark)

For InfoSec we can extrapolate three similar tips for engaging with clients, whether internal or external:

  1. Read the RFP/RFI! Listen to the customer! Write down, in your own simple words, your understanding of the client’s request. Communicate it back to them to make sure the understanding is as complete as possible.
  2. When delivering the response/proposal/etc. make sure you “connect the dots” between the client’s request and your solution. Make sure you account for and document assumptions. Explain why the proposal is the way it is.
  3. Finish your response appropriately by stating the answer clearly. Do not let your solution fizzle out – make sure the client knows you have reached the end and that you have done what was requested. In other words, finish with a flourish!

Item 1 reminds me of a recent near-miss at work. A potential client reached out about an RFP. They were looking for a security solution with a specific scope and desired outcome. We had a meeting with the client about their goals and objectives. They were clear and precise.

Skip ahead less than one week and suddenly a few leaders in my organization decided to make our RFP response something completely different. My vocal dissents were vetoed. The proposal proceeded with this alternate option. It was as if the client came to our restaurant to eat dinner and we decided to sell them recipe books instead.

Worse, there was nothing in this new approach that was truly new – every piece was obviously recycled generic sales material.

The client was not amused. When we met again the client shut down all extraneous-to-their-request discussions and materials. Since some of the team had not abandoned answering the RFP directly, we were able to pivot and still make a strong proposal.

Another recent proposal I worked on illustrates doing all three items well. The client clearly stated their goals in conversation but their RFP was mostly untethered to the goals, almost as if two different teams drafted each independently. Subsequent client conversations gave us what we needed to form a more complete understanding of the business needs.

The proposal was large compared to the RFP, but the space was needed to completely connect the dots between the client’s broad & disconnected needs and how we would deliver them for the desired business outcome. The response included all of the Who-What-Where-When-Why-How structures to clearly communicate our solution.

There is no shortage of experts in this field. By and large we all think we are one, so we rush to a solution without always listening and understanding. Taking a page out of Richard Feynman’s approach to solving physics problems can help address such failings.

Also on:

Appliance Companies Are Lobbying to Protect Their DRM-Fueled Repair Monopolies

The bill (HB 4747) would require electronics manufacturers to sell replacement parts and tools, to allow independent repair professionals and consumers to bypass software locks that are strictly put in place to prevent “unauthorized” repair, and would require manufacturers to make available the same repair diagnostic tools and diagrams to the general public that it makes available to authorized repair professionals. Similar legislation has been proposed in 17 other states, though Illinois has advanced it the furthest so far.

Companies such as Apple and John Deere have fought vehemently against such legislation in several states, but the letters, sent to bill sponsor David Harris and six other lawmakers and obtained by Motherboard, show that other companies are fighting against right to repair as well.

(Via Motherboard)

The right to repair used to be assumed. I remember working on my grandfather’s car with my Dad. I remember changing oil and tires and brakes and head units and shocks and mufflers, etc., for that and other cars. And I wasn’t (and still am not) a car guy.

I built and fixed computers when replaceable parts were the norm.

My Dad, members of my family, and people with whom I went to university worked on farms and ranches & regularly repaired the heavy equipment. These were the real instances of duct tape and baling wire.

How about the early telephone system, which sometimes used barbed wire stretched along fences in rural communities?

We’re not in the early telephone days. We’re in a world where companies can prevent their customers from having agency over products they purchase. Companies can put their customers at risk and not allow the very same customers to protect themselves or even be able to figure out if they’re at risk in the first place.

Also on: