The old Dell tended to let the other guys spend their time and money building big new markets. Then Dell would jump in with its vaunted low-cost model and begin taking market share.

The new Dell is proving to be edgier. In some cases, it’s willing to go after fresh product areas before there’s a market at all, and it’s prepared to chase sales in the dozens rather than thousands of units, if it means keeping demanding customers happy.

Albert Esser, vice president of data center infrastructure at Dell, pulls out a server inside Dell’s new data center, which comes in two shipping containers. (Credit: Erich Schlegel for The New York Times)

As a case in point, Dell has entered the fledgling market for data centers packaged inside shipping containers with a unique, double-decker design that is code-named Humidor. The company showed off its data-center-in-a-box for the first time during my visit last week to its headquarters in Round Rock, Tex.

Sun Microsystems and Rackable Systems were the first large hardware makers to embrace the idea of taking all of the servers, storage systems, networking gear and cooling and power units that make up a data center and packing them into a shipping container. Their rivals largely ridiculed the idea a couple of years ago but have all come out with similar products since then.

These types of systems could appeal to companies that have lost their will to build big new data centers. Rather than paying for a massive, expensive building, a company can order a data-center-in-a-container and plant it in the parking lot. Just add power, water and a network connection, and off you go.

Along similar lines, organizations like the military that need lots of horsepower quickly and in unusual places might adopt the container approach.

To date, however, the containers have been slow sellers. Sun has mentioned a couple of customers, while Rackable has struggled to move the systems, shipping none last quarter.

So why would a company like Dell, which prides itself on using volume to lower costs, get into the container game?

That’s easy: Microsoft.

Microsoft has been the main advocate of containers, saying they will form the basis of its future data center designs. Some of Dell’s first containers will go to a new Microsoft data center near Chicago, according to Forrest Norrod, the vice president in charge of Dell’s Data Center Solutions business.

And Microsoft’s interest in the container idea should inspire others to take a look at the technology.

“I think next year will be the year for this,” Mr. Norrod said.

Whereas competitors have put all of the requisite technology components into a single container, Dell has gone with the double-decker idea. One container is full of server, storage and networking systems, while another container handles power and cooling. By using this design, Dell claims it can stick with standard equipment across the board, saving customers money and making it easier to upgrade the units.

Each set of containers holds about 1,300 servers and consumes about as much power as the homes making up a suburban subdivision. The cost can easily top $500,000.
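The subdivision comparison can be put in rough numbers. Here is a back-of-envelope sketch in Python; the per-server wattage, cooling overhead and per-home draw are all assumptions for illustration, not figures from Dell:

```python
# Rough power estimate for one double-decker container set.
# All figures below except the server count are assumptions, not Dell's numbers.
servers = 1300                  # per the article
watts_per_server = 300          # assumed average draw per server
pue = 1.4                       # assumed overhead for cooling and power distribution
home_kw = 1.2                   # assumed average continuous draw of one US home

it_load_kw = servers * watts_per_server / 1000   # 390 kW of IT load
total_kw = it_load_kw * pue                      # roughly 546 kW at the meter
homes = total_kw / home_kw                       # roughly 455 homes' worth of power

print(f"IT load: {it_load_kw:.0f} kW, total: {total_kw:.0f} kW, ~{homes:.0f} homes")
```

On these assumptions, one container set draws on the order of half a megawatt, which is indeed comparable to a small suburban subdivision.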

The container notion takes some getting used to for customers accustomed to housing their computing gear in a shimmering new facility. But those traditional data center concepts are starting to give way to practicality.

“The general perception of data centers as these pristine environments has been broken down,” said Drew Schulke, a product manager in Dell’s Data Center Solutions group. “These approaches are getting more credibility, especially with capital markets being where they are.”

The container work comes out of an unusual group at Dell that customizes server and storage systems for large customers.

The Data Center Solutions unit started in mid-2006 when Mr. Norrod presented Michael Dell, the company’s founder and chief executive, with a plan to create a kind of start-up within Dell.

The large server makers had failed to come up with systems that were compact, cheap and power-efficient enough to meet the needs of customers like Microsoft and Google. So Dell set to work tailoring products for customers who would purchase about 5,000 to 10,000 servers a quarter.

“Michael was a very active sponsor, to put it mildly, about going after this,” Mr. Norrod said.

According to Dell, the Data Center Solutions business would be the fifth-largest server maker in the world if its revenue were broken out, placing it behind Hewlett-Packard, I.B.M., Dell itself and Sun Microsystems.

Dell still doesn’t innovate, invent or provide anything other than cheap WinTel boxes. The article above is just another example. Read the details: their only customer is Microsoft, which is really just throwing them a bone because the containers run its latest software and storage solution. Dell is only as good or dominant as Intel and Microsoft, and Dell’s competitors (HP, Lenovo) have tightened up their supply chain models to match them on price. All the while, they have continued to diminish their brand with poor outsourced customer service and poor-performing computers, which used to be their hallmark. Good luck and God bless. Can’t wait until you’re out of business.

Dell may not innovate, but they have one of the best supply chains in the world, second only to Wal-Mart. Anytime I am looking for a new computer, I compare the cost of building it myself against buying a Dell, and Dell often wins. They also make great cases; I haven’t found any OEM case that is easier to work with.

#2: PJ…
Server farms are designed with RAID failover and redundancy. If one drive fails, you just hot swap it out and put a new one in, that’s the way servers have been working for years. This approach is only changing from server rooms in buildings to portability. It makes a lot of sense, actually, since the servers, power and cooling systems are becoming so cheap they are commodities. The value is moving towards delivery, mobility (move the server to where the need is) and cost (the cost of a building to hold a server room is more than the servers themselves).
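The hot-swap pattern described above can be sketched in a few lines of Python. The class and names here are purely illustrative, not any vendor’s management API:

```python
# Illustrative sketch of hot-spare failover: when a drive fails, it is
# pulled from the array and a spare is promoted in its place, with the
# rest of the system staying online. Names are hypothetical.
class Array:
    def __init__(self, drives, spares):
        self.drives = set(drives)
        self.spares = list(spares)

    def on_failure(self, drive):
        self.drives.discard(drive)            # pull the failed drive
        if self.spares:
            replacement = self.spares.pop()
            self.drives.add(replacement)      # hot-swap in the spare
            return f"rebuilding onto {replacement}"
        return "degraded: no spare available"

raid = Array(drives=["d0", "d1", "d2", "d3"], spares=["s0"])
print(raid.on_failure("d2"))  # rebuilding onto s0
print(len(raid.drives))       # 4
```

The container approach just moves this same logic up a level: the serviceable unit grows from a drive to a whole server, and eventually to the container itself.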

Completely agree with Paul; the “container full of machines” idea has been around for years, and Dell really didn’t come up with anything new. But that’s how they’ve operated for years. I’ve been inside the Rackable demo container at Supercomputing this year and last, and Sun had theirs before Rackable did. HP also will make one for you if you wish.

PJ – these containers have the systems racked along the sides. When one malfunctions, you simply walk up to the server and replace it like you would anywhere else. They are regular servers, you could buy one of them, 40 of them or 1300 of them. For theft, you would need to be able to drive up with equipment capable of moving a shipping container if you wanted to take it with you, or you’d need to break into the container and take the machines with you, which isn’t really a quick thing to get out of the rack. Plus, the server might not have a fan fitted to it to provide cooling, instead relying on larger fans in the container itself to move air.

Wow, real cloud computing. I had never heard of the concept of systems in a container before, but it is awesome. Dell may not have come up with the idea, but I feel they took the concept to a new level. With the power and cooling in a separate container, a company could just replace the server container and sell the old one as needs change. Plus, if the cooling system container failed, you could swap it out for another, then fix, replace or upgrade the failing container.
I see this as great tech; just add a command center container.
Then you can sit back and control and monitor the operations. Great for the military, oil exploration, disaster relief, mining, construction sites, rock concerts and cloud computing. And don’t forget companies like Pixar that may need to add computing power during final rendering times for their films.
Just move the data center where you need it, from one site to another. Expandable: get more containers when you need them.
Great if you live in a hurricane area; you can move the data center out of the area with a little prep or on short notice.

With the cloud computing idea, you will start seeing companies just leasing them to boost their computing power as needed. Just own a fleet of them and send them out as needed.
Shoot Burning Man with a computerized laser light show.

You speak about innovation; you may want to check your facts. Dell does supply standards-based technologies, but it’s funny that when it comes to managed services, going green and business models, HP seems to be second every time Dell comes out with a new idea. So feel free to continue with your reseller fantasy, HP; meanwhile, I’ll stick with a company that is directly innovative where it counts and impacts my business.

Paul,
So, Dell doesn’t innovate. They give the customer what they want: in this case, a truckload of servers installed in no time, total turnkey solution, totally serviceable, efficient, and with the supply chain to keep them coming as needed. I don’t know if you’ve dealt with contracts before, but Microsoft doesn’t throw anyone a bone, especially on the multimillion dollar level.

# J: The idea of a “container full of machines” may have been around for a few years, J, but it isn’t necessary to come up with a completely new idea every time. (Had it been that way, the Wright brothers’ airplane would never have developed into the A380 or the B-2.) It’s about improving the initial design to make it better and more affordable in terms of maintenance, expandability and, of course, the dollar bills.

Because the double-decker model separates the cooling and power modules from the core server area, modularity improves, making failures much easier to handle. I don’t think Dell ever stated that this is an invention new to the world, but it is certainly a design improvement over the contemporary models.

These container-based server farms may be convenient, but they are certainly not hardened or even fail-safe. They can be made so, but not without significant investment.

Sure, it’s easy to pull out a bad server and slide in a new one. That’s the micro picture. Replace the CPU, and the game goes on.

But there are severe vulnerability issues with these containers.

1) You need multi-sourced and multi-wired network connections. Otherwise one back-hoe in the parking lot kills your connection. So you need a ton of backup datacomm equipment and instantaneous rollover to handle network issues. Without that, you have a nice shiny box filled with silver hardware that does nothing.

2) You need a big chunk of backup power for those inevitable brownouts or blackouts. Is this built into the container? Doubtful for environmental reasons. So now you are dependent on external backup power – again, a single point of failure when the diesel fuel runs out. You can buy and supply additional fuel and generators, but then you have the same type of synchronization and failover issues as with networking.

3) Lightning (and power surge) protection. I sure wouldn’t want to be in an all-steel container with a thousand servers when lightning strikes. Can you say “fried CPU”? I don’t care how many lightning rods and thick grounding cables they put on this thing, it’s still eventually going to take a lightning strike. How is it handled? How are the pieces protected? What about the poor operator schmuck inside?

So these container-based server farms are all cool and maybe will sell a lot of Dell machines, but they’re not a panacea for anything, and they bring along a raft of their own problems.

We’ve been surprised at the level of customer interest in datacenter containers. It’s mostly driven by consistent datacenter capacity constraints we hear from customers; they’ve hit limits on cooling, power and/or space, and can’t deploy additional IT. As Ashlee correctly noted, customers are more open to the idea than even we expected, and it’s directly proportional to the amount of ‘pain’ they have in their datacenter.

You’ve hit on some of the uses, and there are others; most customers so far are interested in pure datacenter capacity expansion, with the container becoming an extension of their DC footprint. Others are indeed looking at containers for disaster recovery sites, as depending on the design they have significant capacity in a relatively small footprint, and can be placed in locations where you might not choose to build a brick and mortar datacenter. And, of course, container farms can be a replacement for a brick and mortar site.

Customer requirements have so far centered around a few key features: First, ability to support multiple brands and types of IT in the container. Next, the same access to the IT that you have in the datacenter (per PJ’s question on replacing components). And finally, redundancy within the container infrastructure itself, so that simple failures don’t cripple the container.

In answer to some of the questions posted here: Security isn’t hard to address; most customers are planning to put their HP POD in some form of structure (warehouse, parking garage, etc.), although some will be placed outside. Once they are off the trailer, they can weigh up to 100,000 pounds fully loaded, so they have a certain amount of intrinsic theft resistance. And security ranging from cameras to biometrics etc. can be added.

Backup isn’t hard to address; it’s done the same way as for brick and mortar sites: standby generator, UPS, etc. It simply depends on what the customer wants.

Containers aren’t the solution to all datacenter problems, but they’re a good way to get fast, power-dense capacity, as needed. And so far, quite a few customers appear to need it…

I am interested in the container, but I am seriously wondering how DELL will handle “lightning strikes” when the unit is staged outside in a parking lot or behind another building. Good question Ted. Maybe a thick rubber padding around the container???

This is really not that earth-shattering a concept. It’s simply a matter of scale and convenience. Large arrays of servers are so common at this point that bundling them into containers is just another form of modularization on a larger scale. And I don’t think these would necessarily be parked outside in open lots. They could be lined up inside virtually any warehouse. What you save on build-out would pay for security for the life of the installation several times over.

Sonny – Who cares about Dell’s plans and business models? You should have a core team that knows the industry you’re in and the technology ahead of time, and that develops your own business models. HP is for people who already know the path their business is on. Besides which, my experience has been that HP products have lasted longer. I don’t want HP telling me how to run my business or develop MY business model, except insofar as the products’ abilities are concerned. What I want are products that fit the model I’ve developed. HP does that (and, to be fair, I’m sure Dell does too, just at a slightly lower quality). Besides, how green is it really if I have to replace parts or suffer downtime (entropic efficiency?)? I want that minimized/mitigated to the fullest extent. That’s where the money comes in: uptime.

To #8: They, and others like them, are simply giving some customers what they want. If you’re the military, sure, you need something like that dropped at a moment’s notice anywhere in the world. If you’re in New Orleans, who cares? If the generator goes down or a lightning strike occurs (which I’m sure the HP PODs and others are grounded and protected against), you’re down, and now you’ve slowed the recoup time for the investment. Let’s not get into security either. But I digress; my real rebuttal is about the “being thrown a bone” comment. Part of that is, I’m sure, that you do not execute the same business, or anywhere near the level of business transactions, that Dell does. You also don’t have the market footprint Dell or HP has. They’ll throw Dell a bone, because through Dell, they get to you, which means the money comes back to them; especially in this market where no one wants to spend, it behooves them to throw Dell a bone.

14. I’m sure the container is grounded in some way.

# 12. Awesome, thanks for the quick response!
Why do I like HP?

Easy maintenance, and in the fight for space in our mini-center, at least 4 of 5 racks are ProLiants. Sure, we’re not pushing the boundaries of space and time like other installations, but, when it comes down to it, we handle BILLIONS of dollars a year. I think that makes up for our physical size. What’s funny is that the new Dell 2950 has crashed at least 5 times over the 4 months since its installation. While more event forensics are needed, so far there’s no real report as to what’s causing the crash; honestly, that could be a Windows thing (W2K3 server). Given that they are all on the same power source, etc., I find it funny that the HPs are still going strong, but the Dell is the only one that’s toasted itself a couple of times already, “out of the box.”

==============================================
Note: Cloud computing? I’d get over it; it’s great for research and things of that nature, but sorry, I’m not using the internet to run our real-time apps with live sensitive data over the net. Just not gonna happen; and not with documents either. I don’t care how encrypted it is; encryption is only as good as the last new method and the least happy employee. At least not in our lifetime; maybe if electricity becomes a free commodity, but still, it’s just not secure enough and will never be secure enough for all the world’s companies to trust it. It’s just an opinion, so, of course, I may be wrong…

Rapid-deployment data centers. I helped design and build the largest rapid-deployment data centers that are in operation. They are in operation at Microsoft’s Virtual Earth facility in Boulder, Colorado.

We are in the process of a version 2 that has 15-23 petabytes onboard in a standard high-cube container. Ours will outperform the Dell, HP, Rackable, IBM and Sun Microsystems products. They are all a minimum of one year behind our testing and deployment.

ALL of the larger financial institutions and ISPs want them. The dramatic reduction in the cost of infrastructure deployment, combined with today’s markets, makes these containers a more than viable product.

Security is not a problem with these containers. Also, our version has full remote control and monitoring of each blade, the climate control system, humidity controls, etc., operating on a secure, separate industrial platform.

With power consumption optimized and both the cost of infrastructure and the time frame for operational deployment tremendously reduced, I can’t see a good argument against rapid-deployment systems.

Although I’m happy to see Dell innovate, if this build-to-Microsoft product is really innovation, Dell has far bigger problems. I had perhaps the worst consumer customer service experience of my life with Dell in April, and this one long-term customer (small business as well as consumer) has vowed never to do business with them again.

Other posts have alluded to their poor customer service. The real innovation would be to return to these roots and once again try to at least give a satisfactory experience to their customers.

There’s nothing like reading the catcalling and moaning that goes on in practically EVERY discussion board. The endless ad hominem attacks, biased posting and general pointless soapbox preaching have infected even the NYT.

Ashlee Vance wrote “Along similar lines, organizations like the military that need lots of horsepower quickly and in unusual places might adopt the container approach.”

More than forty years ago the US Army contracted with URS Corporation to develop systems that were mobile, including the TACFIRE and Combat Services Support System (CS3). They ran on IBM mainframe computers, with mobility provided by deploying the IBM computers in vans.

The Army report linked below says
“In the latter half of 1967, TOS equipment, built into 21 trucks, vans and trailers, were sent to the Seventh Army headquarters in Heidelberg, Germany.”

This was the era when mainframe computers were typically a ‘glass house’ phenomenon; they were usually found in custom-built, climate-controlled data centers because they were expensive compared to today’s commodity-priced computer hardware.