Dell’s datacenter vanishing trick: all is revealed
Watch them explain how they take something that consumes the same amount of electrical power as over 8,000 staff and just make it disappear
An extraordinary insight into how Dell removed, at a stroke, their entire rationale for building new data centers for themselves. I hear you asking: “but is this a good thing for a company that equips its customers’ data centers?”. Watch the video and see what you think.
Here’s an outline of what was discussed at the beginning of the session:
They start off with the claim that data centers are an increasing contributor to climate change.
About two years ago, Dell ran a customer survey and discovered that 65% of their customers were considering additional data center space. Dell were in the same situation, having also reached the limits of their existing capacity.
In response, rather than immediately committing to a strategic expansion of their in-house data center provision, Dell decided as a short-term, interim measure to ‘go colo’ (to co-locate their equipment) by renting the additional space they required from third-party data centers. This bought them a year to consider what to do about future data center capacity.
The key technology which Dell used to address the longer term requirement was virtualisation, which they went on to explain in simple terms.
If servers are likened to engines, conventional server usage involves using one engine per programme or application – virtualisation gives them the ability to put many programmes on one engine.
For much of the time, servers might only be using 5% of their processing capacity and yet still need 100% electrical power.
“You’ve just immediately halved the power demand that I used to have and we can create white space in the data centre; we don’t have to build another, maybe we’ll wait a year.
Well, a year and a half later, now it looks like we can wait ten years or fifteen years, maybe indefinitely.”
Dane Parker, Global Facilities Lead at Dell
The average server utilisation is between 12 and 18 percent of its processing capacity.
Through the use of virtualisation and compression in the data center, Dell’s server utilisation rate has been driven to 42% and is still climbing.
Dell say that they’ve doubled their servers’ workload with no extra power and no extra servers.
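The consolidation arithmetic behind those claims can be sketched in a few lines. This is purely an illustrative back-of-the-envelope calculation using the figures quoted in the post (12–18% average utilisation before, 42% after), not Dell’s actual capacity model; the function name and the assumed fleet of 1,000 servers are invented for the example.

```python
import math

def servers_after_consolidation(n_servers, avg_util, target_util):
    """Hosts needed if workloads averaging `avg_util` utilisation
    are packed (e.g. as VMs) onto hosts run at `target_util`."""
    total_work = n_servers * avg_util          # aggregate compute demand
    return math.ceil(total_work / target_util)

# 1,000 physical servers averaging 15% utilisation (the post cites 12-18%),
# repacked onto virtualised hosts driven to 42% utilisation:
before = 1000
after = servers_after_consolidation(before, 0.15, 0.42)
print(after)                  # far fewer hosts carry the same workload
print(1 - after / before)     # fraction of machines no longer drawing power
```

Since an idle server still draws close to full power, retiring roughly two thirds of the machines is what lets the same workload run on far less electricity, which is the effect the quote above describes.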
As a result, Dell decided not to build any more data centers.
A decent sized data center would cost over a hundred million dollars, much of which would be for providing power: of the company’s total global utility bill, around 40% goes to IT.
In central Texas, Dell has 17,000 employees and two data centers, and the data centers account for half of the total power consumption there. Based on that, Dell decided that they needed to get more capacity from their existing data centers in order to become more efficient.
Dell introduced an initiative to double or triple the utilisation of their existing data center resources. This has halved the previous power demand and created empty space in the data center, meaning that they don’t have to build another one for at least a year.
18 months on, it looks as if they can wait 10 to 15 years, or maybe indefinitely, before having to build another data center.
The video is of a session at the Fortune Brainstorm Green 2010 Virtual Conference called:
The speakers at this session (held in April 2010 in California) are:
Dane Parker, Global Facilities Lead, Dell Inc.
The Moderator is: