A passenger sneaked a firearm through airport security in Atlanta earlier this month before flying with it to Tokyo. This has attracted a lot of media attention, with CNN, Time, CBS, The Hill, The Washington Post, and others publishing write-ups of the incident.
So is the shutdown making airports less safe? Was it the stalemate in Washington, D.C., that allowed someone to slip a gun past TSA screeners?
The short answer: probably not. The story about the firearm appears to have been first reported by WSB-TV, an ABC affiliate based in the Atlanta area. On January 2, a man boarded his Delta flight to Japan with a firearm. Once he landed, he informed Delta workers that he had a gun. Delta in turn informed the TSA, who said in a statement that “standard procedures were not followed.”
The TSA insists the shutdown had nothing to do with the incident. “The perception that this might have occurred as a result of the partial government shutdown would be false,” the agency said in a statement to the press. “In fact, the national callout percentages were exactly the same for Wed, 1/2/19 and Wed, 1/3/18 (when there was no shutdown)–5%,” an agency spokesperson added in an email to Reason.
In other words, this wasn’t the shutdown; it was just normal TSA incompetence.
Sounds plausible to me. The TSA has a pretty bad track record when it comes to identifying items that could actually pose a threat. A 2015 Department of Homeland Security (DHS) investigation, for instance, revealed that in 67 out of 70 cases, undercover investigators succeeded in smuggling weapons or explosives through security.
Paying attention to cybersecurity is more important than ever in 2019. But, some companies are still unwilling to devote the necessary resources to securing their infrastructures against cyberattacks, and naive individuals think they’re immune to the tactics of cybercriminals, too.
For people who still need some convincing that cybersecurity is an essential point of focus, here are six reasons.
1. The Average Cost of a Cyberattack Exceeds $1 Million
It’s no surprise that cyberattacks are costly, but some people will likely be shocked at just how expensive they can be. According to a recent report from Radware, the average total cost of a cyberattack exceeds $1 million. Additionally, victims report issues not directly tied to financial losses, such as decreased productivity and negative customer experiences.
Based on the above statistic, enterprises should conclude that although it costs money to invest in cybersecurity strategies, the expenses could be more substantial if organizations choose not to put enough of their resources toward experts and tools that minimize threats.
2. The U.S. Government Says It’s Time to Come Up With a Better Plan
The U.S. government, like authorities in other nations, continually struggles to safeguard against digital attacks from rivals. The challenges are so immense that government bodies and officials warn that the United States needs a better way to stop adversaries.
A State Department report warned that the country is increasingly dependent on networked information systems, and foes from other nations have learned to exploit that dependence and use it to disrupt the lives of Americans.
Most people who live in the U.S. can at least imagine the consequences of a severe cyber attack that affected the country’s ability to proceed with normal operations. Since government authorities researched the possibility and asserted there’s no time to waste in coming up with an improved approach to cybersecurity, that’s all the more reason to take action this year.
3. The Methods of Attack Are Diversifying
A decade or so ago, people typically felt sufficiently secure online by installing anti-virus software on their computers. That’s still a worthy precaution to take, but it’s no longer adequate for preventing all or even most of the attacks a hacker might try.
According to a 2014 report, cybercriminals orchestrated 75 percent of attacks through publicly known software vulnerabilities. But, they also try to gain people’s credentials through phishing attacks, lock down systems with ransomware, or infiltrate poorly secured connected devices, to name but a few possibilities.
People have a growing number of ways to use technology and rely on connected devices, but that also means the likelihood goes up for potentially unfamiliar kinds of attacks. Focusing on cybersecurity this year requires, in part, understanding the most recent and common types of threats and protecting networks against them.
4. Recent Breaches Victimized Millions
Equifax and Starwood/Marriott dealt with breaches that compromised the data of well over 100 million victims. The earlier revelation about the financial costs of cyber attacks is damning in itself, but it’s crucial for brands — and consumers themselves — to recognize that data breaches can be unintentional or malicious, but in any case, they could affect millions of people.
Then, affected companies have to engage in damage control in an attempt to restore lost trust. Even when those entities put forth the effort, they may still find that customers behave differently following breaches.
More specifically, an April 2018 study examined the connection between consumer trust and spending by having respondents assign trust scores to businesses. The survey revealed that 15 percent of low-trust customers decreased how much they spent at those companies. But among high-trust customers, the decrease in spending was only 4 percent.
5. It Takes Months to Identify and Contain Breaches
If a person or business has a significant water leak in a well-used area, the problem is usually easy to spot and fix. But, it’s typically not so straightforward with cyber-related issues. Research from 2018 published by IBM found that, on average, it takes 197 days to identify a breach and 69 days to contain it. Those timeframes give hackers plenty of time to do damage that may prove irreparable. Then, once headlines indicate how long a breach remained unnoticed, the reputational damage could be severely harmful, too.
Making cybersecurity a focal point this year could minimize the time spent looking for areas of concern within a network, especially if using artificial intelligence-based strategies that learn normal conditions and give warnings about deviations.
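The “learn normal conditions and warn about deviations” idea mentioned above can be illustrated with a toy statistical baseline. This is a minimal sketch, not any particular product’s approach; the sample traffic values and the three-sigma threshold are hypothetical:

```python
import statistics

def build_baseline(normal_samples):
    """Learn 'normal' as a mean and sample standard deviation."""
    return statistics.mean(normal_samples), statistics.stdev(normal_samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

# Hypothetical hourly outbound-traffic volumes (MB) during normal operation.
normal_traffic = [98, 102, 100, 97, 103, 101, 99, 100]
baseline = build_baseline(normal_traffic)

print(is_anomalous(101, baseline))  # False: within the learned normal range
print(is_anomalous(900, baseline))  # True: e.g. possible data exfiltration
```

Real AI-based tools model far richer features than a single metric, but the principle — learn a baseline, alert on deviation — is the same, and it shortens the 197-day identification window cited above.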
6. Cybercrime Is Extremely Profitable
Some criminals alter their methods once it becomes apparent that their current wrongdoings are no longer profitable. But, that probably won’t happen for a while with online crime. Research from a criminology expert published in April 2018 highlighted how worldwide revenues from cybercrime total at least $1.5 trillion annually.
The investigation talked about how cybercrime represents an interconnected web of profit possibilities with blurred lines between legal and illegal activities. If people don’t fight back against online criminals at both personal and organizational levels, hackers will have more opportunities than ever to continue making income while others suffer.
Failing to Focus on Cybercrime This Year Could Cause an Assortment of Issues
This list highlights some of the most prominent reasons why it’s essential to make cybersecurity a priority in 2019. Hackers get progressively more skilled at carrying out attacks, and they can cause significant catastrophes on unprotected or poorly defended networks.
About the author
Kayla Matthews is a technology and cybersecurity writer and the owner of ProductivityBytes.com. (Security Affairs – 2019 Cybersecurity predictions, cyberattacks)
Conventional wisdom in IT security has long taught us that zero-day exploits are rare and that we need to be far more concerned with non-zero-days, which make up the vast majority of attacks.
This was true. But in my experience, few if any security professionals still put it this way, because it sends the wrong message.
This paradigm was challenged recently by Microsoft security researcher Matt Miller in an awesome presentation he did on the evolution of Microsoft Windows exploits and defenses for Microsoft’s last Blue Hat event on February 7.
Prior to seeing Miller’s presentation, I would have guessed that zero-days were still rare.
Don’t guess. Ask a professional.
The new data Miller collected shows that zero-days are actually the norm, and non-zero-days are getting less common over time. He showed that in 2017, every actively exploited Microsoft vulnerability was first exploited as a zero-day. In 2012, that figure was 52 percent, and it had been as low as 21 percent in 2008.
As it should. The threat landscape is wide. The number of platforms, packages, and programs in an environment continues to grow.
Needless to say, his findings have generated lots of discussion. If misunderstood, a reader might be forgiven for wondering how important a role patching plays if the vast majority of exploits have no patch. Here’s an excellent example of why you don’t want to take one data point to build a defense.
I have not seen any new discussion, at least among security veterans.
Because ‘breaking things’ isn’t the point of your work.
How about, “Move fast and make things better,”
“Move fast and create possibility”?
The reason we hesitate to move fast is that we’re worried about what that implies.
Move fast and learn something.
Move fast and take responsibility.
Move fast and then do it again because now you’re smarter.
The alternative is to move slow. To move slow and to hide.
Which means that those you sought to connect, to help and to offer something to will suffer as they wait.
Don’t hoard your work. Own it and share it.
I never liked the “… and break things” part of the slogan either. What happens when you move fast and it works?
I like the agile thinking there, but there are plenty of use cases where moving slow is the more appropriate approach. Security, QA, and A/B testing are three examples where taking time to do it right is better than rushing out a half-baked solution.
There are use cases where moving backwards is needed in order to make a bigger move forward. For example, killing off a moderately successful product to launch something remarkable.
In general, don’t invest too much capital in “fortune cookie” wisdom without some critical thinking.
A new variant of the Ursnif trojan has been discovered targeting Japan since the beginning of 2019. Japan is a common target for Ursnif, but the latest version, delivered by Bebloh, goes to increased lengths to ensure that the victim is indeed Japanese.
New variants of Ursnif are not uncommon since the source code was leaked in 2015, but this version also includes enhanced data theft modules for stealing data from mail clients and email credentials stored in browsers. Other new developments, according to Cybereason research, include a new stealthy persistence module, a cryptocurrency and disk encryption module, and an anti-PhishWall (a Japanese security product) module.
This version of Ursnif also adds some anti-security product capabilities aimed at defeating PhishWall and Rapport. PhishWall is a popular Japanese anti-phishing and anti-banking trojan application, and anti-PhishWall modules have been used by other trojans in the past (such as Shifu and Bebloh).
The anti-Rapport module is designed to defeat IBM Trusteer’s Rapport product. This is not new, but not often seen in malware targeting Japan. The code seems to be based on — if not copy/pasted from — Carberp’s anti-Rapport code (which is freely available on GitHub). Cybereason notes that it has tested neither the PhishWall nor the Rapport module, so cannot attest to their efficacy.
Cybereason is unsurprised by the new concentration on data stealing highlighted by the new version of Ursnif. “With more and more banking customers shifting to mobile banking and the continuous hardening of financial systems,” writes the researcher, “it is not surprising that trojans are beginning to focus more than ever before on harvesting non-financial data that can also be monetized and exploited by the threat actors.”
But what stands out from this campaign, he adds, “is the great effort made by threat actors to target Japanese users. They use multiple checks to verify that the targeted users are Japanese, as opposed to other more prolific trojans and information stealers that cast a wider net when it comes to their victims.”
Data Must be Protected as it Exists at All Points in the Processing Lifecycle
Data is often an organization’s largest and most valuable asset, making it a prime target for all types of adversaries both criminal and nation-state.
Data isn’t merely “often” the most valuable asset. There was a time when physical, analog, off-line assets were the most valuable. That time is, in technology terms, long past.
Nearly every week a new data breach is announced, serving as a consistent reminder that data security matters. In the first half of 2018 alone, 944 breaches led to 3.3 billion data records being compromised. But what does true data security look like? Numerous solutions herald the necessity of ‘end-to-end protection’ and their ability to provide it, but when we break through the buzzwords, do we have a clear picture of what it means to secure data?
With attack vectors emerging from every possible angle and attackers becoming increasingly sophisticated, it has become clear that every part of data security matters — from secure data storage, transit, and processing to access control and effective key management. If one aspect is vulnerable, it undermines the effectiveness of the other security measures that have been put in place.
Sadly, “break through the buzzwords” does not apply to the next paragraph.
This multi-dimensional risk requires a holistic, data-centric approach to security, one focused on protecting the data itself at all points in its lifecycle rather than concentrating efforts only on its perimeter of surrounding networks, applications, or servers.
Breaking one link in a chain is not “multi-dimensional risk”. I agree with the idea (if not the language) of the rest of the paragraph.
Organizations must ensure data is secured at all times by:
1. Securing Data at Rest on the file system, database, or storage technology
2. Securing Data in Transit as it moves through the network
3. Securing Data in Use, while the data is being used or processed
There’s #4 – Data in Limbo, where it resides in system memory moving between the above states. Meltdown and Spectre demonstrated the vulnerability.
Together, these elements form the Data Security Triad, representing the trifecta of protection required to ensure data is secure throughout its entire lifecycle.
I argue data security is not a triad. It was a useful model once but is overly simplistic at best. How do cloud-based and as-a-Service fit in this triad? How do CDNs? What about all the pieces that have been outsourced to third parties?
At the core of this protection strategy is encryption. Encryption renders data useless to an attacker, making it unreadable and therefore removing its value. Thus, encryption is able to undermine the attackers’ purpose – stealing assets of value – and makes the target infinitely less appealing.
Encryption is important, but the integrity of the data (preventing the injection of false data or casting doubt on its validity) and assurance of delivery (preventing the rerouting, delay, or inability to reach the intended destination) are just as important.
Also, just because something is encrypted doesn’t mean it is encrypted well or will be in the future.
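The integrity point above can be sketched with Python’s standard `hmac` module. This is a minimal, hedged illustration of tamper detection — key management is hand-waved, and the message and key shown are hypothetical — showing how a receiver can detect injected or altered data even when confidentiality is handled elsewhere:

```python
import hmac
import hashlib

SECRET_KEY = b"shared-secret"  # hypothetical; real keys need proper management

def sign(message: bytes) -> bytes:
    """Attach an HMAC-SHA256 tag so tampering is detectable."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Constant-time comparison avoids leaking info via timing side channels."""
    return hmac.compare_digest(sign(message), tag)

record = b'{"account": "1234", "amount": 100}'
tag = sign(record)

print(verify(record, tag))                                   # True: intact
print(verify(b'{"account": "1234", "amount": 9999}', tag))   # False: injected data
```

Encryption alone would not catch this: an attacker who can flip ciphertext bits may still alter the plaintext undetected unless an integrity check like this (or an authenticated-encryption mode) is in place.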
Experience tells us that if there is data of value at stake, attackers will find a way to find and reach it – we can’t just lock the front door; every point of entry needs to be protected. Consequently, limiting encryption to only a portion of the Data Security Triad is a dangerous oversight.
Only discussing data in terms of encryption is also a dangerous oversight. In addition to integrity and assurance, access is important – admins, users, processes, applications, &c., privacy is important, and availability is important.
We know attackers are evolving and our security practices must evolve as well. Protection schemes must recognize and secure data as it exists at all points in the processing lifecycle, whether at rest, in transit, or in use.
I appreciate the intent of this piece. Encryption, if implemented well, is a valuable tool. However, encryption isn’t a magic bullet for security. And encryption can give a false sense of security if done in isolation or done poorly.
Most decisions – say 90% – we make in our lives are reversible.
As a general principle for these reversible decisions, I’ve found it helpful to prioritize speed of decision making over accuracy.
This sounds crazy at first – why wouldn’t we try to get decisions right?
It turns out there’s a huge cost in waiting for all the information to appear. So, if we prioritize making the decision quickly instead, we can also go back and change the decision if we see data that tells us otherwise.
Over the long run, two things happen. First, quick experimentation beats deliberation.
And, second, with more repetition, we begin to develop a better gut and nose for the right direction. At that point, decision-making speed morphs into decision-making velocity (velocity = speed + direction – in this case, a direction that is in the ballpark).
Decision making velocity, in turn, leads us to good judgement.
The most common type of delegation actually isn’t delegation at all. Mike calls it “Deciding”. This is what happens when you hire someone to help you with a task or a job, but you don’t ever train or empower them to make any decisions on their own. …
How does this differ from actual delegation?
Assign an Outcome
Actual delegation happens when you assign a task to someone while also empowering them to make any decisions related to completing that task.
Put another way, you are delegating the outcome.
When you can delegate the outcome, it is liberating to everyone involved. Your team member feels trusted and empowered to do their job without you micromanaging them. And you are free to focus on the things that you need to do.
Reward Ownership (Rather Than Quality)
One other thing related to delegating that stood out to me was the importance of rewarding a team-member’s ownership of a task and not the quality of the outcome of that task.
You must allow them to make mistakes, or do things differently. Because they will.
If you only ever reward them when they do things just perfectly the exact same way that you would have done it, then all you’re doing is training them to ask you for a decision at every juncture.
So, instead, celebrate their ability to think and work with autonomy while giving candid and helpful feedback to help them make better decisions in the future.
As Mike writes, it all boils down to letting go of perfectionism.
What are the constraints – money, resources, legal/regulatory, &c.?
By when does it need to be done?
And it’s important to be available to provide coaching along the way, when asked for or needed. My preferred approach is to ask questions back since, unless the issue is unusual, the person to whom you delegated probably already knows the answer. As Shawn Blanc said above, “You must allow them to make mistakes, or do things differently. Because they will”.
I’ve largely stopped writing about the latest study or industry analysis white paper. Rarely do they shed much new light on security. This is an exception. The statistics in the article are jaw-dropping if even close to accurate. But this is the part that is scary:
As a result of these factors, the pressure is reaching boiling point for many.
Over a quarter (27%) of CISOs polled said stress is impacting their mental or physical health, while 23% said the role is damaging their personal relationships. Even worse, 17% admitted they had turned to medication or alcohol to deal with workplace stress.
Mental, emotional, and physical health all can take their toll on well-being. But that 17% number is just as telling – what happens during an event when the head of the organization is blotto?
“It’s no surprise that CISOs are facing burnout. Many lack support from within their organizations, and senior business leaders need to face the facts: the threats are real, and CISOs need to be given the resources and support to tackle them. If not, the board must face the consequences.”
The lack of support feeds the cycle. Even if the CISO does have a health or substance problem, there may not be mechanisms in place to manage a response in lieu of top leadership. I wonder how many DR/BC/IR tabletop exercises cover absent or impaired leadership?
On Saturday, March 18, 1967, around half past six in the morning, the first officer of the Torrey Canyon realised that his vessel was in the wrong place. The 300-metre ship was hurrying north past the Scilly Isles, 22 miles off the tip of Cornwall in the south west of England, carrying more than 119,000 tonnes of crude oil. The aim was to pass west of the islands, but the ship was further east than expected.
The officer changed course, but when the sleep-deprived captain, Pastrengo Rugiati, was awoken, he countermanded the order. A two-hour detour might mean days of waiting for the right tides, so Capt Rugiati decided instead to carry on through the treacherous channel between the Scilly Isles and the mainland.
Most serious accidents have multiple causes. A series of mistakes or pieces of bad luck line up to allow disaster. The Torrey Canyon was hampered by an unforgiving schedule, barely adequate charts, unhelpful winds and currents, confusion over the autopilot, and the unexpected appearance of fishing boats in the intended course. But reading Richard Petrow’s contemporary account of the Torrey Canyon disaster, a clear lesson is that Capt Rugiati was too slow to adjust. He had a plan, and saw far too late that the plan was doomed to failure — and with it, his ship.
Some accident investigators call this “plan continuation bias”. Airline pilots sometimes call it “get-there-itis”. The goal appears within touching distance; it’s now or never. Tunnel vision sets in. The idea of a pause or a change of approach becomes not just aggravating, expensive or embarrassing — it becomes literally unthinkable.
In such circumstances aeroplanes have crashed after trying to land in bad weather because the destination airport was so temptingly close. Patients have died of oxygen starvation because doctors and nurses fixated on clearing blocked airways rather than checking whether an oxygen pump was working. And the Torrey Canyon ran aground, producing the world’s first major oil tanker disaster.
We’ve all experienced “get-there-itis”. For me, it tends to emerge when dealing with family logistics. One child needs to go somewhere, another must be picked up from school. Then it turns out that someone needs to be at home to receive a delivery; the car is in for a service; the babysitter calls to cancel.
The plan seems feasible at first, but as complications mount, it starts to resemble an increasingly precarious assembly of stages and steps, lift-swaps and rendezvous, a Rube Goldberg fever-dream of an itinerary.
Read the whole article for more instances, as well as an appraisal around Brexit. My main takeaway is this:
If I’m lucky, someone finds the mental space to see clearly the fragility of it all. Someone suggests a cancellation or two, replacing the entire time-and-motion nightmare with something radically simpler. It’s that moment of clarity that is so often missing. Haste makes things worse …