Much of what’s here I advocated in my previous professional life:

A SCADA environment (Supervisory Control and Data Acquisition) is unlike a conventional IT network in that it provides interconnectedness between industrial systems such as robots, valves, thermal or chemical sensors, command and control systems and HMI (Human Machine Interface) systems, rather than desktops. These environments monitor, manage and administer critical infrastructures in various fields such as transport, nuclear, electricity, gas, water, etc.

Historically, these SCADA control systems have used a dedicated set of communication protocols but as technology and industrial architectures have evolved, these same industrial systems are all interconnected via a conventional IP network. The problem of course is not the use of the conventional IP but rather potentially vulnerable environments such as an unpatched Windows operating system on an HMI platform. Reducing down time is sometimes justification enough to postpone patching on these systems, making SCADA environments potential targets for cybercriminals.

via Fortinet Blog | News and Threat Research Security 101: Securing SCADA Environments.

Read on for their recommendations.
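To make the exposure concrete: once HMIs and PLCs sit on a routable IP network, anything that can reach them can try to talk to them. Here’s a minimal Python sketch of a reachability check for Modbus/TCP on port 502 (the host addresses are hypothetical, not from the article, and you should only probe equipment you’re authorized to test):

```python
# Minimal reachability check for Modbus/TCP (port 502) endpoints.
# Host addresses are hypothetical placeholders; this only tests TCP
# reachability and does not speak the Modbus protocol itself.
import socket

CONTROL_HOSTS = ["10.10.1.5", "10.10.1.6"]   # hypothetical PLC/HMI addresses
MODBUS_TCP_PORT = 502

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host in CONTROL_HOSTS:
        status = "REACHABLE" if reachable(host, MODBUS_TCP_PORT) else "blocked/unreachable"
        print(f"{host}:{MODBUS_TCP_PORT} -> {status}")
```

If workstations on the office LAN get “REACHABLE” back, that’s a segmentation problem no matter how well patched the HMI is.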

The report provides the outlines of two tools, a suggested Review Process and proposed Development Framework to help boards, senior managers and information teams in organisations that would like to review their information security strategies and governance arrangements.

Since its launch in March this year, the DGSF actively engaged with civil servants, cyber specialists and technology providers to help guide the development of the Forum and to assist in quality assuring the work produced through the initiative. The report identifies four high priority areas, for government to address as it continues to make greater use of technology to meet austerity targets and improve the delivery of digital public services:

via Recommendations for strengthening cyber security policies.

You’ll have to click through to read the recommendations, but any seasoned InfoSec professional can guess what they are.

Securing data can be hard work. It can be complicated. It can be expensive. And then sometimes you see people putting so little effort into it that there’s just no excuse.

An example of this was sent to me by a reader. In anticipation of new gun control laws scheduled to take effect October 1, tens of thousands of citizens of Maryland applied for gun permits, which requires a background check.

The Maryland State Police, charged with performing the background checks, don’t have the resources to do it soon enough, and, according to the Baltimore Sun, “Gov. Martin O’Malley said … that the state is mustering all necessary resources” to complete the task in time.

“Mustering all necessary resources” in this case means “cutting corners.”

First the state scanned the forms. Then, in order to expand access to the data necessary to perform the background checks to over 200 data entry personnel in non-law enforcement agencies, the state set up a publicly-accessible web site with a single shared username and password.

The data entered in the site included driver’s license numbers, social security numbers, addresses and other personally identifying information.

via Maryland state security sloppiness exposes personal data | ZDNet.

There’s an old phrase: “garbage in, garbage out”. I’m wondering if “Personally Identifiable Information” (PII) should replace “garbage” going forward.

What irks me about these situations is that the same government that puts protection requirements in place often isn’t held to them. These days it seems like governments and their contractors are the ones most likely to end up on the front page with an easily preventable information disclosure.

Perhaps this is yet another example of the public sector in need of disinfecting daylight.

GFI Software announced the findings of an extensive independent research project looking at end user use of mobile devices at work and in their daily commute to and from the workplace, which revealed that commuters are using free, unsecured and unknown Wi-Fi services for accessing sensitive company data in greater numbers.

The survey of 1,001 UK office workers with a tablet or smartphone who travel to and from work on a train, bus or tube was carried out by Opinion Matters, and revealed not only that mobile devices and using data services are firmly entrenched as the primary activity of the average commuter, but also that commuters and their employers are falling foul of data security issues, as well as heightened risk of physical crime.

100% of the survey respondents acknowledged that they used open, public Wi-Fi connections at least once a week to carry out work-related tasks such as sending and receiving email, reviewing and editing documents and logging into other company servers and storage repositories.

On average, users connected to public Wi-Fi to do work and access work systems 15 times a week, putting company data and passwords at risk from packet sniffing and other forms of traffic interception.

via Travelers regularly connect to free, unsecure Wi-Fi networks.
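The “packet sniffing” risk mentioned above isn’t theoretical: on an open Wi-Fi network, anything sent over plain HTTP is readable by anyone in range running a capture tool. A minimal sketch, assuming the scapy package and a wireless interface named wlan0 (both assumptions on my part), and only for traffic you’re authorized to monitor:

```python
# Passive capture of unencrypted HTTP traffic on an open network.
# Requires root privileges and the scapy package; the interface name
# "wlan0" is an assumption and will differ on many systems.
from scapy.all import sniff, TCP, Raw

def show_http(pkt):
    """Print request lines and credential-bearing headers from plain HTTP."""
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        for line in pkt[Raw].load.split(b"\r\n"):
            if line.startswith((b"GET ", b"POST ", b"Authorization:", b"Cookie:")):
                print(line.decode(errors="replace"))

# Capture only TCP port 80 (plain HTTP). HTTPS and VPN traffic would not
# be readable this way, which is exactly why they matter on public Wi-Fi.
sniff(iface="wlan0", filter="tcp port 80", prn=show_http, store=False)
```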

Mobile users, especially those who travel regularly, are prime targets in any enterprise. Security education needs to start with these users, but often doesn’t. Heavy travelers tend to be high-ranking managers or corporate officers, and their attitudes tend toward:

  • Security breaches are something that happens to other people
  • I’m too important
  • Nothing bad ever happens to me

The coddling many corporate IT departments extend to higher-ups ultimately backfires. The “velvet glove” approach to executives encourages exactly the sense of invincibility that leads to a major security breach.

IT departments would do better by treating all users as adults and professionals able to handle direction and constructive criticism.

By extension, a manager or corporate officer – made aware of the real threat – will be more likely to fire up the VPN than surf the unprotected Wi-Fi.

Your mileage may vary.

What is your take?

This is really interesting research: “Stealthy Dopant-Level Hardware Trojans.” Basically, you can tamper with a logic gate to be either stuck-on or stuck-off by changing the doping of one transistor. This sort of sabotage is undetectable by functional testing or optical inspection. And it can be done at mask generation — very late in the design process — since it does not require adding circuits, changing the circuit layout, or anything else. All this makes it really hard to detect.

The paper talks about several uses for this type of sabotage, but the most interesting — and devastating — is to modify a chip’s random number generator. This technique could, for example, reduce the amount of entropy in Intel’s hardware random number generator from 128 bits to 32 bits. This could be done without triggering any of the built-in self-tests, without disabling any of the built-in self-tests, and without failing any randomness tests.

via Schneier on Security: Surreptitiously Tampering with Computer Chips.
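To put that 128-bit-to-32-bit reduction in perspective, 32 bits is a search space an attacker can simply enumerate. A back-of-the-envelope sketch (the guess rate is an assumption for illustration, not a benchmark):

```python
# Rough comparison of brute-force effort for 32 vs. 128 bits of entropy.
GUESSES_PER_SECOND = 1e9   # assumed rate: one billion guesses/sec on one machine

def seconds_to_exhaust(bits: int) -> float:
    """Worst-case time to enumerate every value of a secret with `bits` of entropy."""
    return (2 ** bits) / GUESSES_PER_SECOND

for bits in (32, 128):
    secs = seconds_to_exhaust(bits)
    years = secs / (3600 * 24 * 365)
    print(f"{bits:3d} bits: {secs:.3g} seconds (~{years:.3g} years)")

# 32 bits: roughly 4 seconds at this rate -- trivially breakable.
# 128 bits: on the order of 1e22 years -- far beyond any practical attack.
```

The chip still passes every randomness test because the output still looks well distributed; it’s just drawn from a space small enough to brute force.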

Assume that it’s time for Bob’s performance review.

Bob’s boss says he’s a great addition to the team. Easy to work with!

And the sales numbers? Hot mama, Bob’s smokin’! Mr. Bob surely has worked himself toward a big, fat raise!

Or not. Bob would have gotten a raise, that is, but he got fooled by a phishing email and unwittingly invited the bad guys in through the front door, torpedoing Widget Industries Ltd’s multimillion-dollar investment in security systems.

Fiction! But can you imagine if this were really the way employees were assessed? They answer a phishing scam email, they trigger a major security breach, and then they’re held accountable?

via Should employees be punished for sloppy cyber security? [POLL] | Naked Security.

A thought experiment, sure, but one that leads in some interesting directions.

The joy of out-of-date Java:

As predicted at the end of 2012 and proved by the ever expanding use of exploit kits, vulnerabilities in popular and widespread software such as Java and Adobe’s Acrobat Reader and Flash top the list of the most exploited by cyber crooks.

Zero-day vulnerabilities are less of a problem than old ones – in fact, given that many people still use older, vulnerable versions of the software, wielding exploits for zero-days is practically unnecessary for your average cyber crook that goes after money.

via Attacks targeting unsupported Java 6 are on the rise.

I ran into an instance of someone running Java 5, which is akin to your second cousin calling you about a problem he’s having on Windows 98.
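If you want to find stale Java in your own environment, the runtime’s version banner is easy to inspect. A minimal sketch (note that Java prints its version banner to stderr; the end-of-life cutoff below reflects Java 6 being unsupported and should be adjusted as versions age out):

```python
# Flag machines still running an end-of-life Java runtime.
# Note: `java -version` prints its banner to stderr, not stdout.
import re
import subprocess

EOL_MAX_MAJOR = 6   # Java 6 and older are unsupported; adjust over time

def installed_java_major():
    """Return the major Java version (6 for "1.6.0_45", 11 for "11.0.2"), or None."""
    try:
        out = subprocess.run(["java", "-version"],
                             capture_output=True, text=True, check=True)
    except (OSError, subprocess.CalledProcessError):
        return None
    match = re.search(r'version "(?:1\.)?(\d+)', out.stderr)
    return int(match.group(1)) if match else None

major = installed_java_major()
if major is None:
    print("No Java runtime found (or version string not recognized).")
elif major <= EOL_MAX_MAJOR:
    print(f"Java {major} detected: unsupported and a prime exploit-kit target.")
else:
    print(f"Java {major} detected.")
```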

Apple has distributed a list of security fixes in the just-released iOS 7 software update. And it’s as long and encompassing as you’d imagine any major platform update would be. I haven’t seen them online yet, so I’m reproducing it here for anyone who’s urgently interested. When/if Apple posts it to their knowledge base, we’ll update and link out.

via Apple details security fixes in iOS 7. And there’s a ton of them! | iMore.

I haven’t updated any of my devices yet, and I doubt I’ll go down the iPhone 5* path. I’m happy Apple addressed security issues, and I hope they’ll backport some of these fixes for devices that can’t run iOS 7.

I hadn’t considered the implications of InfoSec standards compliance from the business side, like insurance. Yet another topic I want to read up on.

If NIST came up with a new standard for cybersecurity, would your organization be insurable for cyber risks when measured against that standard? This was a leading topic of discussion in Dallas last week at the latest in a series of workshops attempting to fine tune the proposed NIST cybersecurity framework (we have discussed previous CSF meetings on We Live Security here and also here, plus a podcast here).

Of course, NIST is a standards agency, not an insurance or enforcement agency. But NIST is within Commerce, and it does purport to provide standards which are widely accepted across an industry by, for example, insurance companies who are looking for some way to measure whether your business stacks up to the “gold standard” and charge you premiums accordingly. At the moment, many companies should be able to qualify for policies (according to at least one panelist), but insurance companies seem keenly interested in certain key indicators, like whether your corporate culture is proactive or reactive with respect to emerging security issues. Do you stay on top of change, or take a more passive stand-back-and-watch approach when it comes to security? The answers to these questions could factor into the rates you pay for cyber insurance (here’s an example of such insurance, offered by AIG, and by ACE USA).

via NIST cybersecurity framework: Your insurance company is watching – We Live Security.

What are your thoughts? Do you have reliable resources for more information?

Internet regulators are pushing a controversial plan to restrict public access to WHOIS Web site registration records. Proponents of the proposal say it would improve the accuracy of WHOIS data and better protect the privacy of people who register domain names. Critics argue that such a shift would be unworkable and make it more difficult to combat phishers, spammers and scammers.

A working group within The Internet Corporation for Assigned Names and Numbers (ICANN), the organization that oversees the Internet’s domain name system, has proposed scrapping the current WHOIS system — which is inconsistently managed by hundreds of domain registrars and allows anyone to query Web site registration records. To replace the current system, the group proposes creating a more centralized WHOIS lookup system that is closed by default.

via WHOIS Privacy Plan Draws Fire — Krebs on Security.

I didn’t even realize this was in the works. I’ll comment more once I learn more about it. In the meantime, I recommend giving this a good read and checking out the comments.
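For background on what’s at stake: today’s WHOIS really is open to anyone who can speak a trivially simple text protocol on TCP port 43. A minimal sketch of a raw query (whois.verisign-grs.com is the registry WHOIS server for .com; the domain is just an example):

```python
# Raw WHOIS query per RFC 3912: open TCP port 43, send the domain name,
# read the response. No authentication is involved, which is exactly the
# openness the ICANN working group's proposal would close by default.
import socket

WHOIS_SERVER = "whois.verisign-grs.com"   # registry WHOIS server for .com
DOMAIN = "example.com"

with socket.create_connection((WHOIS_SERVER, 43), timeout=10) as sock:
    sock.sendall(f"{DOMAIN}\r\n".encode("ascii"))
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:
            break
        chunks.append(data)

print(b"".join(chunks).decode("utf-8", errors="replace"))
```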