Study: Enterprises Fail To Test End User Awareness Training, Password

Security awareness programs and strong password policies are standard procedure in most organizations, but most enterprises don’t do enough to reinforce them, according to a new survey.

According to a study published Friday by security firm Rapid7 (PDF), most companies don’t go back and test their employees to see whether they have learned from security training and policy.

via Study: Enterprises Fail To Test End User Awareness Training, Password.

I haven’t read the Rapid7 report. In the meantime I stand by my earlier anecdotal article.

The Fallacy of Security Awareness

Please prepare for a bit of a rant:

I’ve been in IT and Information Security for a long time.

When I started in the mid ’90s everyone said, “We need to educate the users”.

That mantra carried on through the years. Platforms changed. Computing grew more powerful. The Internet’s importance took hold and took off and reached beyond expectations.

Now we have cloud computing and *-as-a-Service and Bring Your Own Device (BYOD) and social media and … and … and …

I still hear security professionals say, “We need to educate the users”. And I sigh, meaningfully.

My daughter’s high school presented a mandatory anti-bullying seminar. In the two hours they covered every aspect of why bullying other students was wrong and could lead to terrible consequences. They conveyed why bullying was bad for all students and did so in an emotional and meaningful way. Everyone applauded at the end.

Coming out of the seminar my daughter heard a group of students approach one kid, surrounding him. “If you think that meant anything, you’re wrong”, they said.

The tormenting kids were eventually caught and punished, but the point here is that they went through “user education” and came out the other side more determined to do the opposite.

Security education runs that same risk, plus oversaturation, resentment, and general ineffectiveness. Making things worse, many such programs I’ve seen treat employees like children, incapable of understanding what is work-related and what is personal.

My idea of user education is:

  • Present an Acceptable Use Policy, Employee Privacy Policy, and related materials to new employees on day one for signature
  • Every year as part of performance review, have each employee sign the latest version of the policies with changes highlighted and explained
  • Encourage employees to raise concerns at any time to HR, IT, IT Security, Legal, or their manager without penalty

It’s also important that the company not hide behind security for unpopular internal measures. The best example I can think of is when a company’s legal department requires and enforces email retention policies; too often the blame falls on IT, when the business and legal need to step up. Such misdirected blame degrades IT’s relationship and authority with the user community. Occasionally legal and IT security interests align, but email retention is an example of direct conflict.

What are your thoughts? How is this handled in your organization, for good or for improvement?

Oracle Java fails at security in new and creative ways | Naked Security

Oracle Java, easily the most attacked and successfully exploited browser plugin, is on my radar again after finding new ways to fail at security.

The first sign of trouble recently was posted on Jerry Jongerius’s site, Duckware. He described the embarrassingly broken code signing implementation in the Java Runtime Environment (JRE).

The purpose of code signing is to cryptographically ensure that you can identify who created a program and that it hasn’t been tampered with by any third parties.
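A minimal sketch can make the tamper-detection part of this concrete. Note this is a simplification of my own for illustration: real code signing signs the hash with the publisher's private key so the identity claim can also be verified, whereas this only shows why any third-party modification is detectable.

```python
import hashlib

def digest(data: bytes) -> str:
    # The signer hashes the program; in real code signing this hash
    # is what gets signed with the publisher's private key.
    return hashlib.sha256(data).hexdigest()

# Hypothetical applet bytes, for illustration only.
original = b"public class VersionCheckApplet { /* trusted code */ }"
published_digest = digest(original)  # shipped alongside the applet

# An untampered download reproduces the published digest...
assert digest(original) == published_digest

# ...while any modification by a third party changes it.
tampered = original + b" /* injected payload */"
assert digest(tampered) != published_digest
```

If the verifier skips or mishandles either check, as the Duckware write-up describes for the JRE, the signature prompt tells the user nothing reliable.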

For example, Oracle offers a test applet (applets are Java programs that run in your browser) to determine whether your version of Java is up to date.

When you download the applet with Java, you are prompted to run the applet with a warning that Java applets can be dangerous, the name of the applet, the publisher and the URL serving it to you.

via Oracle Java fails at security in new and creative ways | Naked Security.

ISC Diary | A Random Diary

The current discussion about breaking encryption algorithms has one common thread: random number generators. No matter the encryption algorithm, if your encryption keys are not random, the algorithm can be brute-forced much more easily than theoretically predicted based on its strength. All encryption algorithms depend on good random keys, and generating good random numbers has long been a problem.

In Unix systems, for example, you will have two random devices: /dev/random and /dev/urandom. “random” usually produces random numbers based on some source of entropy. In Linux, events like mouse movements, disk activity and interrupts are used. Older versions of the random number generator used network activity, but since an attacker may be able to influence network activity, this source is no longer used. The Linux random number generator was found to be not particularly well implemented, in particular on diskless systems and systems with little user activity, such as routers [1].
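In practice, application code rarely reads these devices directly; a sketch of the usual approach, assuming Python is available on the system:

```python
import os

# os.urandom() reads from the kernel's CSPRNG (backed by /dev/urandom
# on Linux), which mixes entropy from interrupts, disk timing, and
# similar events as the excerpt describes.
key = os.urandom(32)      # 32 random bytes, e.g. an AES-256 key
another = os.urandom(32)  # an independent draw

# Two independent 256-bit draws should essentially never collide.
assert len(key) == 32
assert key != another
```

The quality of these bytes is only as good as the entropy pool behind them, which is exactly why the diskless-router case in [1] matters.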

Recently, some Linux distributions like OpenWRT were found vulnerable when used on MIPS-based hardware. The random number generator on these systems uses the number of CPU cycles since reboot as a seed. On MIPS, however, the respective function always returns 0 rather than the actual cycle count. [2]
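Why a constant seed is fatal can be shown in a few lines. This sketch of mine uses Python's ordinary (non-cryptographic) PRNG purely to illustrate the determinism; the names and values are hypothetical:

```python
import random

def keystream(seed, n=8):
    # A PRNG seeded with a fixed value is fully deterministic:
    # the same seed always yields the same output sequence.
    rng = random.Random(seed)
    return [rng.randrange(256) for _ in range(n)]

# If the "cycle counter" seed is always 0, every device on every boot
# derives the identical sequence, so keys are trivially predictable.
assert keystream(0) == keystream(0)

# A genuinely varying seed would produce diverging sequences.
assert keystream(0) != keystream(12345)
```

An attacker who knows the seed is constant can simply regenerate the same sequence offline, no brute force required.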

Are there better ways to collect random numbers? One challenge is increasing the amount of entropy (random events) collected. There are some good attempts to use microphones, cameras and other hardware devices to improve the entropy pool. Sadly, there are no simple “standardized” solutions implementing these techniques.

via ISC Diary | A Random Diary.

Fortinet Blog | News and Threat Research Security 101: Securing SCADA Environments

Much of what’s here I advocated in my previous professional life:

A SCADA environment (Supervisory Control and Data Acquisition) is unlike a conventional IT network in that it provides interconnectedness between industrial systems such as robots, valves, thermal or chemical sensors, command and control systems and HMI (Human Machine Interface) systems, rather than desktops. These environments monitor, manage and administer critical infrastructures in various fields such as transport, nuclear, electricity, gas, water, etc.

Historically, these SCADA control systems have used a dedicated set of communication protocols, but as technology and industrial architectures have evolved, these same industrial systems are now all interconnected via a conventional IP network. The problem, of course, is not the use of conventional IP itself but rather potentially vulnerable environments such as an unpatched Windows operating system on an HMI platform. Reducing downtime is sometimes justification enough to postpone patching on these systems, making SCADA environments potential targets for cybercriminals.

via Fortinet Blog | News and Threat Research Security 101: Securing SCADA Environments.

Read on for their recommendations.

Recommendations for strengthening cyber security policies

The report outlines two tools, a suggested Review Process and a proposed Development Framework, to help boards, senior managers and information teams in organisations that would like to review their information security strategies and governance arrangements.

Since its launch in March this year, the DGSF has actively engaged with civil servants, cyber specialists and technology providers to help guide the development of the Forum and to assist in quality-assuring the work produced through the initiative. The report identifies four high-priority areas for government to address as it continues to make greater use of technology to meet austerity targets and improve the delivery of digital public services:

via Recommendations for strengthening cyber security policies.

You’ll have to click-through to read the recommendations, but any seasoned InfoSec professional can guess what they are.