The new reality, in a post-“Safe Harbor” world, and more specifically a world where politicians & police clamor for back doors, is that companies are moving data and data centers into specific jurisdictions.

I find the data escrow concept an interesting development:

Microsoft officials previously said that they will be operating two new German datacenters, located in Magdeburg and Frankfurt, in the second half of 2016. These datacenters, which will offer Azure, Office 365 and Dynamics CRM Online, will give users the option to have their data-access controlled by a trusted third party, not Microsoft. Officials said that access to customer data stored in these new datacenters would be under the control of T-Systems, a Deutsche Telekom subsidiary that would act as a data trustee.

Source: Microsoft details more on its German datacenter data-access lockdown plan | ZDNet

Let me know your thoughts in the comments!


Given how much data the scientists at CERN have to crunch through, it’s not surprising that it takes its computing power seriously. This video takes a look inside the massive computer center that allows the magic to happen.

In what is essentially the brain of the Large Hadron Collider, it is noisy, hot—and incredibly powerful. Sit back and lust over the tech on show.

via Inside CERN’s Massive Computer Center.

General Motors Co.’s new data center in Warren has received a unique environmental award for a facility of its kind.

The Detroit-based automaker Friday announced its Enterprise Data Center on its Warren Technical Center campus earned Gold certification by the U.S. Green Building Council’s LEED, or Leadership in Energy and Environmental Design, program.

Fewer than 5 percent of data centers in the U.S. achieve LEED certification, according to the building council. GM’s data hub on its Technical Center campus in the Detroit suburb is the company’s fifth LEED-certified facility and second brownfield project.

via GM’s Warren Enterprise Data Center achieves Gold certification from US Green Building Council | MLive.com.

Good on you, GM. It’s a nice benefit of the company’s in-sourcing moves.

87 percent of IT professionals currently leveraging private cloud solutions indicate that their companies host clouds on-premises rather than with third-party providers, according to Metacloud. Reduced cost (38 percent) topped security (34 percent) as the reason respondents gave for deploying a private cloud.

via Most companies choose on-premise private cloud deployments.

The buzz marketing of public cloud continues at a brisk pace, at least in my anecdotal experience. The cost driver is the surprise in this report, especially that it ranked higher than security. What I’m not surprised about is that people are starting to realize that there are hidden costs behind the public cloud. I just didn’t know that realization had progressed so far, at least according to this one report.

Your mileage may vary.

There’s a great post by Rob VandenBrink over at the ISC Handler’s Diary about embedded devices that are hiding in plain sight in your data center.

I was recently in a client engagement where we had to rebuild / redeploy some ESXi 4.x servers as ESXi 5.1. This was a simple task, and quickly done (thanks VMware!), but before we were finished I realized that we had missed a critical part – the remote managent [sic] port on the servers. These were iLO ports in this case, as the servers are HP’s, but they could just as easily have been DRAC / iDRAC (Dell), IMM or AMM (IBM) or BMC (Cisco, anything with a Tyan motherboard or lots of other vendors). These “remote management ports” are in fact all embedded systems – Linux servers on a card, booting from flash and usually running a web application. This means that once you update them (via a flash process) they are “frozen in time” as far as Linux versions and patches go. In this case, these iLO cards hadn’t been touched in 3 years.

So from a security point of view, all the OS version upgrades and security patches from the last 3 years had NOT been applied to these embedded systems.

This is a thorny issue, as hosts often need downtime to patch these embedded systems. Check out the thread there for how others are handling or mitigating this.

Oh, and I’ll add Sun’s LOM (Lights Out Management) to the list.
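
If you want a quick inventory of how stale your management ports are, here’s a minimal sketch using ipmitool’s `mc info` command, which reports the BMC firmware revision over IPMI. The host list and credentials are placeholders, and it assumes the management ports answer IPMI over LAN (iLO, DRAC, IMM and most generic BMCs do):

```python
#!/usr/bin/env python3
"""Rough inventory of BMC (iLO / DRAC / IMM) firmware versions via ipmitool.

Assumes ipmitool is installed and the management ports answer IPMI over LAN.
Hostnames and credentials below are placeholders -- swap in your own.
"""
import subprocess

BMC_HOSTS = ["ilo-esx01.example.com", "ilo-esx02.example.com"]  # placeholders
IPMI_USER = "admin"      # placeholder credential
IPMI_PASS = "changeme"   # placeholder credential


def firmware_revision(host: str) -> str:
    """Return the 'Firmware Revision' line reported by `ipmitool mc info`."""
    result = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host,
         "-U", IPMI_USER, "-P", IPMI_PASS, "mc", "info"],
        capture_output=True, text=True, timeout=30,
    )
    for line in result.stdout.splitlines():
        if line.strip().startswith("Firmware Revision"):
            return line.split(":", 1)[1].strip()
    return "unknown"


if __name__ == "__main__":
    for host in BMC_HOSTS:
        print(f"{host}: firmware {firmware_revision(host)}")
```

Comparing the reported revision against the vendor’s current release makes the “frozen in time” gap visible, and that list becomes the patching backlog you actually have to schedule downtime for.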

via ISC Diary | Silent Traitors – Embedded Devices in your Datacenter.

Matthew Wallace at AllThingsD wrote up a great article about how organizations employ a myriad of tactics to avoid the risks of shared storage environments, tactics that are often inefficient and ultimately self-defeating:

Massive overprovisioning of resources in clouds, dedicated storage platforms attached to shared compute platforms, dedicated shelves in shared storage platforms, or massive horizontal scaling are options used every day. They don’t solve the problem — they avoid the problem, often at great expense or through significant architectural shifts.

My takeaway from this article is to ask the right questions of your cloud storage provider or your storage infrastructure vendor to make sure you’re not impacted by “Noisy Neighbors”:

For instance, does your CSP work with a storage vendor that offers guaranteed QoS on a storage platform? … Cloud environments empower you with the business agility of service on demand and flexibility to respond to changing business needs rapidly. Adding resources for a time and then giving them up when they are no longer needed is a major benefit. While the advancement of cloud computing has made those accessible on the compute side, the storage side was left behind by the limitations of rotational disks and the inability to offer ironclad QoS guarantees.

The power of such a solution … is not only in knowing that you can guarantee a certain number of IOPS on each volume, but to pair that with cloud environments to allow the business agility to burst as needed on the storage array the way that cloud environments offer that flexibility for compute.

The rapid and automated provisioning world of the cloud demands that storage companies build APIs rich enough to control every aspect of an array. Building the user interface as a layer on top of the API is a demonstration of API and design maturity that shows a solution is future-proofed against demanding cloud orchestration requirements. Designing the solution to be linearly scalable without artificial breakpoints or step functions in performance keeps the provisioning and growth simple and reliable, shutting out the noisy neighbors once and for all.

via The Problem With Noisy Neighbors in the Cloud – Matthew Wallace – Voices – AllThingsD.
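
To make those “right questions” concrete, here’s a minimal sketch of what per-volume QoS provisioning can look like when an array exposes it through an API. The endpoint, the field names (min_iops, max_iops, burst_iops) and the credentials are hypothetical stand-ins, not any specific vendor’s schema; the point is simply that a floor, a cap and a burst allowance get set per volume, programmatically:

```python
"""Illustrative sketch: provisioning a volume with per-volume IOPS QoS through
a storage array's REST API. Endpoint, fields and credentials are hypothetical;
real arrays (and their orchestration plugins) expose their own schemas."""
import requests

ARRAY_API = "https://storage-array.example.com/api/v1"  # hypothetical endpoint
AUTH = ("svc-cloud", "changeme")                         # placeholder credentials


def create_volume(name: str, size_gib: int, min_iops: int,
                  max_iops: int, burst_iops: int) -> dict:
    """Create a volume with a guaranteed floor (min), cap (max) and burst IOPS."""
    payload = {
        "name": name,
        "size_gib": size_gib,
        "qos": {"min_iops": min_iops, "max_iops": max_iops,
                "burst_iops": burst_iops},
    }
    resp = requests.post(f"{ARRAY_API}/volumes", json=payload,
                         auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()


def update_qos(volume_id: str, **qos) -> dict:
    """Raise or lower a volume's QoS on the fly, e.g. for a month-end batch run."""
    resp = requests.patch(f"{ARRAY_API}/volumes/{volume_id}/qos", json=qos,
                          auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    vol = create_volume("erp-db-01", size_gib=500,
                        min_iops=2000, max_iops=8000, burst_iops=12000)
    # Temporarily raise the cap for a burst, then drop it back later.
    update_qos(vol["id"], max_iops=15000)
```

If your CSP or storage vendor can’t express something like a per-volume min_iops floor through an API that orchestration tooling can drive, the noisy-neighbor risk hasn’t gone away; it has just been pushed back onto you.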