I’m baffled as to why programmers put their trust in this advertising company to do the right thing, or why companies would stake their reputation on Go. Several people tell me that Google handed over control to open source, but the main landing page for Go, golang.org, the place where everyone needs to go to program in the language, says:
The copyright page, which a lot of folks point to, actually says:
Except as noted, the contents of this site are licensed under the Creative Commons Attribution 3.0 License, and code is licensed under a BSD license.
… which means Google can exempt whatever it wants from the CC & BSD licenses. A good legal argument could be made about the BSD license for the code, as the commas make things more open to interpretation; the term “code” could include HTML and other markup. But IANAL.
Back to my main point: Google’s reputation, based on their behavior, is not good. I would not want to stake my company or my coding on them.
(Picture via Roman Synkevych (@synkevych) on Unsplash)
AI can now easily (8 seconds) change the identity of someone in a film or video.
Multiple services can now scan a few hours of someone’s voice and then fake any sentence in that person’s voice. […]
Don’t buy anything from anyone who calls you on the phone. Careful with your prescriptions. Don’t believe a video or a photo and especially a review. Luxury goods probably aren’t. That fish might not even be what it says it is.
But we need reputation. The people who are sowing the seeds of distrust almost certainly don’t have your best interests in mind; we’ve all been hacked. Which means that a reshuffling is imminent, one that restores confidence so we can be sure we’re seeing what we think we’re seeing. But it’s not going to happen tomorrow, so now, more than ever, it seems like we have to assume we’re being conned.
Sad but true.
What happens after the commotion will be a retrenchment, a way to restore trust and connection, because we have trouble thriving without it.
(Via The end of reputation; photo via Raphael Lovaski on Unsplash)
Apologies to Seth for quoting nearly his whole post, but it’s important and scary.
Neal Stephenson, in his book Fall; Or, Dodge in Hell, addresses this very issue of reputation and authenticity. In simple terms, it involves leveraging something like a blockchain to “check in” or “sign in” to legitimate things by you or things you control. He also talks about Editors, human professional social media filters, which takes us down a different rabbit hole.
As I move as much of my on-line life as possible onto platforms I control or trust, I am thinking about how to validate “me” outside of them without that validation coming back to bite me later, assuming such a thing is possible.
What do you think?
The Department of Justice wants access to encrypted consumer devices, but promises not to infiltrate business products or affect critical infrastructure. Yet that’s not possible, because there is no longer any difference between those categories of devices. Consumer devices are critical infrastructure. They affect national security. And it would be foolish to weaken them, even at the request of law enforcement.
(Via The Myth of Consumer Security – Lawfare)
I love this article. Katie, as usual, is on point.
The vast majority of bugs found via bug bounty programs are cross-site scripting [XSS] bugs, a known class of bugs that are easy to detect, and easy to fix.
“Why would organised crime or nation-states pay for simple classes of bugs that they can find themselves? They’re not going to pay some random researcher to tell them about cross-site scripting bugs,” Moussouris said.
Amen! I want to hand out tee shirts with a snappier phrase to organizations.
“You should be finding those bugs easily yourselves too.”
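Moussouris’s point that XSS is easy to find and fix can be made concrete. A minimal, hypothetical sketch (the function names and the greeting endpoint are mine, not from any real codebase): user input dropped into HTML unescaped is the whole bug, and escaping it is the whole fix.

```python
# A hedged sketch of the classic reflected-XSS bug and its one-line fix.
# Rendering user input into HTML without escaping lets an attacker inject
# script; escaping the input renders it as inert text instead.
import html

def render_greeting_vulnerable(name: str) -> str:
    # User input lands in the HTML verbatim -- the XSS bug.
    return f"<p>Hello, {name}!</p>"

def render_greeting_fixed(name: str) -> str:
    # html.escape() converts <, >, &, and quotes to entities -- the fix.
    return f"<p>Hello, {html.escape(name)}!</p>"

payload = "<script>alert(1)</script>"
print(render_greeting_vulnerable(payload))  # script tag survives intact
print(render_greeting_fixed(payload))       # escaped to &lt;script&gt;...
```

That a bug this mechanical can earn a bounty payout is exactly her point: the organization could have caught it with routine output-encoding review long before paying a researcher for it.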
Moussouris is a huge supporter of bug bounties, having run both the Hack the Pentagon and Hack the Army programs for the US military. But she says that relying on a public bug bounty program just creates the “appearance of diligence”.
“This is not appropriate risk management. This is not getting better when it comes to security vulnerability management,” she said.
Moussouris told the story of one security researcher who’d made $119,000 within four hours in a bug bounty program. That’s more than $29,000 per hour to find simple bugs in a known class.
“That’s a great ROI [return on investment] for that researcher. It’s a terrifying ROI for the organisation that paid him,” she said. …
Simple bugs can be found way, way more cheaply.
Bug bounties are a tool, but only one tool. And it’s a game, so people will look to take advantage.
Then there’s the eternal problem of basic cyber hygiene. Moussouris says we “struggle as an industry” to deal with the last-kilometre problem of actually applying the patches.
“A lot of the patterns [have] not actually shifted that much from where we were when I started out professionally 20 years ago as a penetration tester,” she said.
“We’ve created a $170 billion industry, which, we’re really good at a few things, security not exactly being one of them. Marketing, definitely.”
(Via Relying on bug bounties ‘not appropriate risk management’: Katie Moussouris; picture via Caleb Bryan on Unsplash)
But I don’t know what the right way is:
Networking and web security giant Cloudflare says the recent 8chan controversy may be an ongoing “risk factor” for its business on the back of its upcoming initial public offering. […]
8chan became the second customer to have its service cut off by Cloudflare in the aftermath of the attacks. The first and only other time Cloudflare booted one of its customers was the neo-Nazi website The Daily Stormer in 2017, after the site claimed the networking giant was secretly supportive of it.
Cloudflare, which provides web security and denial-of-service protection for websites, recognizes those customer cut-offs as a risk factor for investors buying shares in the company’s common stock. […]
Cloudflare had long taken a stance of not policing who it provides service to, citing freedom of speech. In a 2015 interview with ZDNet, chief executive Matthew Prince said he didn’t ever want to be in a position where he was making “moral judgments on what’s good and bad,” and would instead defer to the courts. […]
Cloudflare has also come under fire in recent months for allegedly supplying web protection services to sites that promote and support terrorism, including al-Shabaab and the Taliban, both of which are covered under U.S. Treasury sanctions.
In response, the company said it tries “to be neutral,” but wouldn’t comment specifically on the matter.
(Via Cloudflare says cutting off customers like 8chan is an IPO ‘risk factor’ by Zack Whittaker)
I am mixed on this takedown even after a solid week of reflection. There is no doubt in my mind that 8chan is toxic and a blight on humanity, and the same goes for The Daily Stormer. I’m happy they are no longer protected by Cloudflare.
However, I don’t like that:
– there’s no oversight into these takedowns;
– Cloudflare acted under public pressure (well, social media pressure), and the mob is often not rational (or real);
– and, from a technical perspective, these instances set a problematic precedent.
Cloudflare and similar companies are almost as core to the Internet’s function these days as DNS and NTP. My site is protected by Cloudflare, in fact (and in full disclosure, etc.). They and their competitors are akin to an active Internet insurance policy. Denying core-adjacent functionality and insurance to sites that give the dregs of society a place to congregate and amplify their hate seems a no-brainer at first blush. But it also lays out a blueprint for blocking sites for far less.
Sadly, I don’t know how I would feel better about this or how I would solve it. I’m open to constructive discussion.
To be clear: I agree with Cloudflare’s actions in regard to 8chan and The Daily Stormer. I would like those actions codified into a documented policy.
Today I learned that ZIP Codes do not strictly represent geographic areas but rather “address groups or delivery routes”.
> Despite the geographic derivation of most ZIP Codes, the codes themselves do not represent geographic regions; in general, they correspond to address groups or delivery routes. As a consequence, ZIP Code “areas” can overlap, be subsets of each other, or be artificial constructs with no geographic area (such as 095 for mail to the Navy, which is not geographically fixed). In similar fashion, in areas without regular postal routes (rural route areas) or no mail delivery (undeveloped areas), ZIP Codes are not assigned or are based on sparse delivery routes, and hence the boundary between ZIP Code areas is undefined. […]
ZIP Codes are therefore not that reliable when doing geospatial analysis of data:
> Even though there are different place associations that probably mean more to you as an individual, such as a neighborhood, street, or the block you live on, the zip code is, in many organizations, the geographic unit of choice. It is used to make major decisions for marketing, opening or closing stores, providing services, and making decisions that can have a massive financial impact.
> The problem is that zip codes are not a good representation of real human behavior, and when used in data analysis, often mask real, underlying insights, and may ultimately lead to bad outcomes. To understand why this is, we first need to understand a little more about the zip code itself.
(Via The Bear with Its Own ZIP Code by Jason Kottke)
I ran into this a decade ago, plus or minus a few years, when trying to come up with a good mechanism for imprecisely denoting the general location of network equipment. One idea was to use the ZIP code, or its postal equivalent outside of the US, in device naming or SNMP location strings. I determined that ZIP codes and their analogs are useful for postal delivery but not for asset and information management. There was also a substantial cost for a database of every possible postal code on Earth, which would have been massive overkill for the need.
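The pitfall is easy to demonstrate. A minimal sketch, using entirely hypothetical ZIP codes and coordinates: because ZIPs are delivery-route labels rather than geographic areas, two addresses sharing a ZIP can be far farther apart than two addresses in different ZIPs, which is exactly what breaks ZIP-based spatial analysis.

```python
# Hedged illustration that "same ZIP" does not mean "nearby".
# All ZIP codes and coordinates below are made up for the example.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical customers: (name, ZIP, latitude, longitude)
customers = [
    ("a", "89001", 37.30, -115.30),  # on a sparse rural delivery route
    ("b", "89001", 37.80, -114.60),  # same ZIP, dozens of km away
    ("c", "89042", 37.31, -115.29),  # different ZIP, next door to "a"
]

def dist(x, y):
    return haversine_km(x[2], x[3], y[2], y[3])

same_zip = dist(customers[0], customers[1])
diff_zip = dist(customers[0], customers[2])
print(f"same ZIP: {same_zip:.0f} km apart; different ZIP: {diff_zip:.0f} km apart")
```

Aggregating these customers by ZIP would lump “a” and “b” together and split “a” from its actual neighbor “c”, which is the masking effect the quoted article warns about.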
It’s worth checking whether the rules we hold dear, and hold fast to, are helping the people we serve.
- Why does this rule exist?
- Does this rule benefit the majority of our customers? How?
- Do we have difficulty looking people in the eye when we explain this rule? Why?
- What story does this rule tell the customer [and ourselves] about our values?
- How does this rule make us better?
- What would happen if we scrapped this rule?
Doing what’s accepted or expected isn’t necessarily the right thing to do.
(Via The Story of Telling)
I like this.
This is a good post from Om Malik where he talks about what makes an expert:
Just because someone labels you as an “expert” doesn’t mean you are one. People get a lot of credit these days for stumbling onto things that may very well have happened had they been standing there or not. In addition to luck and talent, it takes time to become actually good or great at something. It’s not so much the 10,000-hour theory that is popular these days, but rather it’s about learning the lessons that only time can reveal.
Most ‘experts’ are fake. If you call yourself an expert, you are certainly lying to everyone and yourself. True experts are hard to find because they are focused on their craft so intensely that you rarely know who they are. At least that has been my experience.
(Via The Brooks Review Member Feed)
This struck me after having read Brian Krebs’ article about Marcus Hutchins, the guy who was responsible for both stopping the spread of the global WannaCry ransomware outbreak in 2017 and spreading the “Kronos” banking trojan in his younger days. Krebs describes Hutchins as an “accidental hero”, a “security enthusiast”, and a “security expert”. The middle one is probably the most correct of the bunch, but “security professional” is best.
The hero descriptor is perhaps more egregious than the expert label. We, in general, throw “hero” around far too liberally. In the WannaCry case, Hutchins was not unique in his discovery; he was merely first. He did not display exceptional courage, nobility, or strength when he registered the domain that sinkholed the malware. He did spend money and time, and his swift action benefitted a lot of people, organizations, and companies.
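The mechanism behind that domain registration is worth spelling out. WannaCry contained a hard-coded “kill switch”: it tried to resolve a gibberish domain and stopped spreading if the lookup succeeded, so registering the domain flipped the switch worldwide. A hedged sketch of the logic, using a placeholder domain rather than the real one:

```python
# Sketch of a WannaCry-style kill-switch check (placeholder domain,
# not the actual one Hutchins registered).
import socket

KILL_SWITCH_DOMAIN = "example-kill-switch.invalid"  # hypothetical

def should_halt(domain: str) -> bool:
    """Return True if the domain resolves, i.e. someone registered it."""
    try:
        socket.gethostbyname(domain)
        return True   # domain resolves -> sinkhole active -> stop spreading
    except socket.gaierror:
        return False  # NXDOMAIN -> the pre-registration state -> keep going

print(should_halt(KILL_SWITCH_DOMAIN))
```

That the fix was a routine domain registration plus some curiosity is part of why “hero”, with its connotations of courage and sacrifice, sits oddly here.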
I value Krebs’ reporting and the risks he takes when writing some of his pieces, but I did not care for this. Let’s temper descriptors, shall we?
The FTC Pleads With Claimants to Accept Credit Monitoring After Pitiful Equifax Money Pot Empties — Pixel Envy:
Consumers could never be fully compensated for the impact of this breach, but announcing this as a settlement of over $575 million with $300 million going towards credit monitoring services is misleading at best. Equifax also did not have to admit culpability, and the CEO responsible retired with a compensation package with a minimum $18 million value — more than half this $31 million pot that could be split between 147 million affected consumers.
This settlement is infuriating and insulting.
Couldn’t have said it better.