
An anonymous reader writes "A Wired article discusses the relative decline of Dell, HP, and IBM in the server market over the past few years. Whereas those three companies once provided 75% of Intel's server chip revenue, those revenues are now split between the big three and five other companies as well. Google is fifth on the list. 'It's the big web players that are moving away from the HPs and the Dells, and most of these same companies offer large "cloud" services that let other businesses run their operations without purchasing servers in the first place. To be sure, as the market shifts, HP, Dell, and IBM are working to reinvent themselves. Dell, for instance, launched a new business unit dedicated to building custom gear for the big web players — Dell Data Center Services — and all these outfits are now offering their own cloud services. But the tide is against them.'"

What's the point?
1. You use fewer and cheaper parts in the power supply.
2. You have fewer and shorter cables.
3. You use 5V and 3.3V regulators that are the right size for the job, which saves space and material.
4. You get to choose where to put these regulators, so heat management can be more optimal.
5. It's easier to integrate the 12V battery with the space saved.

Other motherboards make use of similar DC-DC converters and have for a long time. It's nice to have a 12VDC bus; it makes the design more dense. But it's neither innovative nor unique. Instead, it's all about density and design for a specific purpose. These aren't retail-able machines. And there are now luscious racks you can obtain with lots of dense Intel, AMD, and even ARM-powered systems. If you have the application, someone has a design.

I've always been appalled by the way PCs rely on big, hot, wasteful, noisy internal power supplies. When IBM entered the workstation market 30 years ago (Oh, Lord, that makes me feel old), I worked for a company that made a pre-PC x86 system [computinghistory.org.uk] that relied entirely on external, passively cooled power supplies. To me, this was clearly the way of the future, but once IBM entered the market, everything had to be IBM-compatible, even the way the power system worked. Because if you couldn't use IBM-compatible power supplies, your system cost too much to build. (I once had to throw out a perfectly good Zenith PC with a blown PSU; although it was mostly IBM-compatible, its power supply was proprietary and cost too much to replace.)

So, Google can't go into the hardware business, because their machines would cost too much and would rely too much on proprietary infrastructure. Easier to justify using your own technology regardless of cost when you're gigantic and profitable.

HP and Dell's nightmare isn't Google. It's cloud computing in general. The cloud providers (which includes Google, if you ignore the fact that they only provide high-level cloud services, unlike Amazon) mostly build their own hardware. Those that don't buy cheap no-name hardware.

The cloud providers (which includes Google, if you ignore the fact that they only provide high-level cloud services, unlike Amazon) mostly build their own hardware.

Google provides low-level cloud services (IaaS in the form of Google Compute Engine, PaaS in the form of Google App Engine, RDBMS-in-the-cloud in the form of Google Cloud SQL, bucket-style storage in Google Cloud Storage) as well as higher-level services (all of Google's various apps build on their cloud infrastructure.)

So the Google-Amazon distinction drawn in the parenthetical is inaccurate.

I've always been appalled by the way PCs rely on big, hot, wasteful, noisy internal power supplies.

I don't follow your complaint. I'm sure you don't have a MORE EFFICIENT, SMALLER and QUIETER, PASSIVE, EXTERNAL power supply. Obviously, you threw in at least one trait that isn't possible in combination with the rest. In particular, it's incredible just how much air and heat a tiny little 12v fan can move, even the almost completely silent ones (see: SWiF2-1200 or 800).

Internal power supplies that don't make a lot of noise are becoming increasingly common now. But for most of the PC's 30-year history, PC PSUs have been noisy power hogs. It was only when people started worrying about energy waste that anything was done about it.

I'm probably guilty of overstating the potential of passively-cooled PSUs. I just noticed that they seemed to work well on some pre-PC systems I worked with (you dislike cables, but I dislike noise, and everybody dislikes wasting energy). They only d

That's great... for a server or specialized workstation. For your bog-standard PC tower you are talking 3 to 5 slots which can be pretty much anything and could need anything from practically no power to assloads of power. You can have onboard or discrete sound; hell, you can put in 3 graphics cards with each being fed a ton of power. There are simply too many different things to have it all fed by a passively cooled power brick.

Personally I LIKE the current design, because that means I can quite easily desig

Cloud computing is a fad. The reason why is BGP. BGP means that there's nothing but statistical luck that your connection to your data will go through. The biggest companies in the world (and the largest purchasers of IT equipment) will not ever use it. It will always be relegated to the consumer and the small business, who don't have much to lose if they can't access the data.

At some point, some genius will invent a new internet protocol that will enable the data to be stored local to the owner but can

Your statement about BGP makes no sense to me. How does BGP interfere with cloud-type connections and not others?

You seem to be claiming that that cloud computing is simply impossible. And yet Google, Facebook, Amazon, Salesforce, and Microsoft all operate huge data centers that run only cloud technology. Not only are big companies using it, but they're selling their excess cloud capacity. That's how the two biggest cloud services got started: Amazon and Salesforce developed cloud technology because they ne

Your statement about BGP makes no sense to me. How does BGP interfere with cloud-type connections and not others?

He is rehashing - in a rather pained and circuitous fashion - the "if you lose your internet connectivity you can't do any work" argument.

This point is not entirely without merit, but generally fails to recognise that a) most companies these days can't do a lot of work without an internet connection anyway, and b) internet connectivity is usually a lot easier and cheaper to make highly available and redundant than server infrastructure.

Not in the least bit. Google designs their servers to optimize power usage and absolute lowest cost per compute cycle. Those are not the same goals for every server buyer. For instance, single-threaded performance is a large factor for me because we run a lot of interactive workloads that are single-threaded or weakly threaded, but Google doesn't really care about single-threaded performance because they're optimizing at the datacenter level. I also care a lot more about the reliability of any given unit, because my jobs are mostly traditional single-server jobs with only my most critical workloads being clustered, so the loss of any given node has a significant impact on my overall reliability, whereas Google can lose dozens of servers a day per datacenter and it would have no impact on their overall operations. Another example is storage: Google uses COTS SATA drives with horrible MTBF stats, and they do so without RAID protection. The only application where that might remotely have a chance of working for me is Exchange 2010, because I have four copies of each database online and the client is seamlessly pointed to a working copy.

Google's server architecture is custom made for their datacenters and built around their application. What they could offer is a turn-key datacenter that requires a similar workload to theirs... and it is not their business to do so.

Google's solution is cheap, UNRELIABLE servers. I liked the idea of a built-in battery for about 5 seconds, until I realized that the PSU isn't going to have any way to do a weekly self-test of the battery, or allow hot-swapping it... the features that separate decent UPSes from low-end consumer crap. I liked the idea of motherboards stripped of unnecessary components, until I saw it only h

Why does that matter? The only justification for bonding with 10g these days is "redundancy" and I've seen many more outages (at a variety of sites) from people failing at bonding than I have from switch failure.

If a machine is that critical the service it runs shouldn't live on a single machine.

Even at my last job where we had a design based on multiple SPOFs we lost machines to PSU or drive/RAID failure several times, but never network, except for the one site that did "redundant" NIC

Even at my last job where we had a design based on multiple SPOFs we lost machines to PSU or drive/RAID failure several times, but never network, except for the one site that did "redundant" NICs.

I've never seen anyone "failing at bonding", and any such misconfiguration would be picked-up by the monitoring system before a given server went live, so your trained-monkeys appear to be highly defective, and you clearly need to get them traded-in for better ones.
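Whichever side of the argument you land on, the failover test is the part that separates working bonds from "redundant" NICs that aren't. A hedged sketch of the pre-go-live check (the bond and interface names are assumptions; adjust for your site):

```shell
# Sanity-check a Linux bond before the server goes live.
# active-backup mode is the hardest to "fail at", since it needs
# no switch-side configuration, unlike 802.3ad/LACP.
cat /proc/net/bonding/bond0      # shows mode, currently active slave, per-NIC link status
ip -br link show bond0           # quick up/down check

# Then actually force a failover and re-read the status:
ip link set eth0 down
cat /proc/net/bonding/bond0      # active slave should now be the other NIC
ip link set eth0 up
```

Monitoring that only watches the bond interface, and never pulls a link, is exactly how misconfigured bonds make it into production.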

But mdadm *does* beat at least some of the enterprise $700-$1500 ones as well. My LSI MegaRAID SAS 9261-8i cost me about $900 (the battery alone was around $300) and it's slower than snot.

I was raking in 800 MB/s seq with mdadm on an empty 8-disk RAID-50 using a bunch of $30 "cheapy" SATA HBAs, but when I switched the exact same drives to hardware RAID, the most I could get was 250 MB/s (seq) on an empty array and 160 MB/s at 85% full. Not to mention the random read I/O of 1 MB/s (yes, one MB per second -- n
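For reference, a sketch of how an mdadm RAID-50 like the one described is typically built: two RAID-5 legs striped together with RAID-0. Device names here are assumptions; this needs root and eight spare disks.

```shell
# Two 4-disk RAID-5 legs...
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[b-e]
mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[f-i]
# ...striped together into a RAID-0 = RAID-50
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2

# Rough sequential-read check (1 GB, bypassing the page cache);
# not a rigorous benchmark, but enough to spot a 800 vs 250 MB/s gap:
dd if=/dev/md0 of=/dev/null bs=1M count=1024 iflag=direct
```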

I know. It is crazy! No hardware RAID is running any sort of software on it, right?? That would be batshit crazy!! It is all baked into the fabric of spacetime.

I trust my single-purpose RAID controller card a lot more than my general purpose operating system to get the write right.

Software RAID can't have battery backup. No sirreee! Those UPS things are not for commodity hardware anyway, only for big iron.

A UPS is not infallible, since your server's operating system is subject to other failures such as someone yanking the power cord(s), hitting the reset button on the server, or an operating system crash. A hardware RAID card is not subject to any of these failures: if the power is yanked before it writes data, the data will remain in the cache to be retried when the disks are available.

And a journaling file system?? That only exists on hardware from the big 3! Software RAID 1 implodes into a tiny black hole every time you run your rsync. Everyone knows that!

your RAID controller dies: good luck getting the data off the disks

This is such BS! The RAID controllers from the big three have placed redundant copies of the metadata on the drives for at least a decade. All you need to recover the array in the event of a card failure is to place them into another server with the same generation controller, or replace the failed controller. Heck, when HP designed their own hardware you could even move an array out of a Proliant and place it in an MSA array and the array wou

All you need to recover the array in the event of a card failure is to place them into another server with the same generation controller or replace the failed controller.

Exactly. You have to go out and buy a new controller. In some cases, you have to match the firmware version. In reality, when you buy a controller card, you should probably buy a second card as a spare in case the primary card dies.

There is no such complication using software RAID under Linux. I don't have to ask if the vendor has ma

Dude, you can still buy any Dell, HP, or IBM RAID controller ever produced, because they sell each model by the millions. The last time I had to match firmware versions was like 8 years ago with an IBM controller; it's never been an issue with HP (there may be potential issues between uncertified controller and disk firmware combinations, but they're a hell of a lot less likely than similar problems with "let's buy a bunch of generic HDDs and pray they all play nicely with whatever controller I bought"). If

You seem to have missed my point that I have had a bad experience with performance when using a hardware RAID controller. I got a lot of support from the vendor, who agreed that I was using appropriate enterprise-class drives. Eventually the conclusion from the vendor was that RAID cards were built to optimise certain types of usage at the cost of poor performance for other types of usage.

Well, that's the difference between rolling your own RAID array and buying from the big three -- you won't be talking to the RAID vendor, you'll be talking to the company that built your system, and when they isolate the performance problem, they'll get custom firmware from the RAID card maker and/or the drive manufacturer.

Apparently what I was doing (or rather, what the company I was working for was doing) just didn't fit into the envelope of performance for that card.

What is this special use case?

In my experience, hardware RAID has other disadvantages: flexibility -- can you re-shape or extend your array with a hardware card?
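On the flexibility point, md can do this online. A hedged sketch of growing a RAID-5 from 4 to 5 disks (array and device names are assumptions; needs root and real member disks):

```shell
# Add the new disk as a spare, then reshape the array onto it.
mdadm --add /dev/md0 /dev/sdf
mdadm --grow /dev/md0 --raid-devices=5 \
      --backup-file=/root/md0-grow.bak   # survives a crash mid-reshape

cat /proc/mdstat                         # watch reshape progress

# Afterwards, grow the filesystem to use the new space, e.g.:
# resize2fs /dev/md0
```

Many hardware cards either can't reshape at all or require taking the array offline, which is the asymmetry the parent is pointing at.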

I have seen really terrible performance on real hardware RAID cards using enterprise-class hard drives. And, yes, I am 100% certain that it was not a fakeRAID controller card.

Hardware RAID is not a magic bullet for performance, and it comes with a number of disadvantages (your RAID controller dies: good luck getting the data off the disks).

What kind of workload were you running? As I said, hardware raid is typically faster than software raid, especially for writing to RAID-5/6 volumes. If your workload is mostly read-only, then you may not see much (if any) improvement with hardware RAID.

I use RAID to protect me from server downtime more than to protect my data - even if I have redundant servers, if one server in an HA pair is down, then I have no redundancy left so I use RAID (sometimes with dual controllers), dual power supplies, etc to hel

I trust my single-purpose RAID controller card a lot more than my general purpose operating system to get the write right

You know why Sun invented ZFS, right? Or why many of the IBM big clusters (Blue Gene) have no hardware controllers beyond simple SAS/FC HBAs for their data stores? I wouldn't trust any particular part in a computer to get it right; all it takes is 1 flipped bit. And HW RAID is particularly bad at keeping data portable.
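The ZFS point can be made concrete: end-to-end checksums catch that one flipped bit, and a scrub tells you which device returned bad data. A sketch using file-backed vdevs so it needs no spare disks (pool and file names are assumptions; real pools would use whole disks, and this assumes ZFS is installed):

```shell
# Build a throwaway mirror out of two sparse files.
truncate -s 256M /tmp/d0 /tmp/d1
zpool create tank mirror /tmp/d0 /tmp/d1

# Scrub re-reads everything and verifies block checksums end to end;
# status reports per-device read/write/checksum error counts.
zpool scrub tank
zpool status tank

zpool destroy tank       # clean up
```

A RAID card only knows whether the drive acknowledged the write; ZFS knows whether the data that came back is the data that went in.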

A UPS is not infallible, since your server's operating system is subject to other failures such as someone yanking the power cord(s), hitting the reset button on the server, or an operating system crash. A hardware RAID card is not subject to any of these failures: if the power is yanked before it writes data, the data will remain in the cache to be retried when the disks are available.

Again, problems which have been solved by most if not all current file systems (except NTFS). The problem with those hardware RAID cards of yours is also that they need batteries to keep such data

If I have a vertical architecture, then I want a box I can get someone onsite in 4hrs or less.

And that ain't Newegg, that is a Dell- or HP-sized company.

Management turned down my plan to have a second server. It was to be the identical model, but without all the disks and redundancy. They figured HP's 4-hour response time would be better than a hot spare server.

Then the crash came.

A nice fellow showed up within 4 hours, with the "most likely" part. It wasn't. The next day, more parts. Nope. The next day, two nice fellows showed up and replaced every part but the case. That solved it.

The cost of downtime was so far beyond the cost of the spare server that

As much as anything, I think virtualization is murdering the market. I bought a $3000 server that hosts six VM guests: two Windows installs (one a DC, one an Exchange server) and four Linux. A couple of years ago, I would have needed at least three servers to do it (one for each Windows install and one for Linux). Admittedly they wouldn't have to have the balls that the new server has, but still, I think we'd be talking about $4000 to $6000 in hardware. Even better, these are all just basically images sitting on hard drives, so they can essentially be perpetual. Two or three years from now, when the current server dies or I decide I need more juice, I'll just move the VM images over and away I go, and with hardware prices the way they are, I doubt the next-generation server will cost any more than the one I have now, and maybe even less.

Factor in the cloud, VPS hosting, and so on, and the demand for servers will inevitably drop.

I run a co-op VM cluster on Ganeti. We bought 3 supermicro 1U single-socket machines (12-core AMD, 64G of ram) for about $7,000. We have about 60% of our capacity rented out. The nice part is we allocate based on 1G of ram slices so you get a pretty powerful minimum server.
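For the curious, carving out one of those 1G-of-RAM slices on a Ganeti cluster looks roughly like this. Everything here is an assumption (node layout, OS image, and exact option spelling vary by Ganeti version; check gnt-instance(8) for yours):

```shell
# Create a DRBD-mirrored guest with the minimum 1G slice.
gnt-instance add -t drbd \
    -B memory=1G,vcpus=1 \
    --disk 0:size=20G \
    -o debootstrap+default \
    guest1.example.com
```

The `-t drbd` template is what lets a 3-node cluster like the one described tolerate losing a node: the guest's disk is mirrored to a secondary node and can be failed over.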

Makes you think what the cloud is doing to the OS server market. It seems only the M$-managed parts of the cloud make M$ any real money, and the rest of the cloud is running OSs that keep revenue within those parts of the cloud controlled by those operators. Take the point of view that the cloud is a whole and not really as separated as it is made to appear, because you can tie your services to more than just one operator, especially considering the risks of the cloud.

Source your own. I priced one from HP/Dell and it would have cost $6,000 plus. I built it with the same specs for $3,000. That right there is why their server sales are dwindling.

The difference is not always so dramatic.

My local whitebox builder can put together hardware equivalent to a Dell R720: dual E5-2620 CPUs, 32GB RAM, dual 1TB disks with onboard RAID (i.e. fake RAID) for $2800, with a one-year carry-in warranty. Dell charges $3566 for the equivalent server but includes a 3-year next-business-day on-site warranty.

So the Dell costs $766 more, or think of it as about $20/month for on-site service over the three-year warranty.
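The per-month figure is just the price gap spread over the 36-month warranty:

```shell
# $3566 (Dell) - $2800 (whitebox) = $766, over 3 years of on-site coverage.
awk 'BEGIN { printf "$%.2f/month\n", (3566 - 2800) / 36 }'
# prints: $21.28/month
```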

If you're a large shop (or a very small shop) and don't mind taking care of motherboard

The problem is that you have to support all of that equipment you just threw together all piecemeal-like. Do you have spare parts available? If no, how much does it cost to have them shipped overnight? Are they still available via retail channels or do you have to dredge through eBay? How much does it cost to purchase and store spare inventory? Do you have the equipment to test for failed components without the possibility of frying other equipment?

Buy from the Big Three, but get it refurb. You can get them with the original 3-year, 4-hour warranty still in place. Extend it if you need that, or better yet buy another one and there are your spare parts.

The problem is that you have to support all of that equipment you just threw together all piecemeal-like. Do you have spare parts available? If no, how much does it cost to have them shipped overnight? Are they still available via retail channels or do you have to dredge through eBay? How much does it cost to purchase and store spare inventory? Do you have the equipment to test for failed components without the possibility of frying other equipment?

Those "Big Three" server companies charge more because of service and support so you don't have to worry as much about those things. RMA and forget. And yeah, I'm saying that with a straight face.

There are times where a company is small enough to where your tech has enough idle time to deal with a white box server. Other times, your techs are better utilized doing other work.

The Big 3 have the same problems. I've seen lots of IBM servers have failed RAID controller batteries, which IBM won't replace under warranty because they're "consumable", and won't replace for a fee because they aren't available anymore. On the other hand, installing a third-party part voids the warranty anyway.

I've seen lots of IBM servers have failed RAID controller batteries, which IBM won't replace under warranty because they're "consumable", and won't replace for a fee because they aren't available anymore.


Under 15 USC 2302(c), they cannot require that original equipment be used. If the 3rd-party component (in this example, a battery) can be shown to have caused damage, then they may have grounds to deny a warran

That particular problem was solved a few years ago when they introduced flash backed write cache. Basically it's a supercap or bank of regular caps that will power the controller long enough to push ram contents into a flash module. I won't buy anything else and in fact HP stopped offering battery backed units with the gen8 servers.

I'm only really familiar with SuperMicro products, but they offer a pretty standard warranty [supermicro.com] for their servers. Since they use pretty standard components, rather than vendor-specific stuff or firmware-locked drives (see my other post), spare parts are pretty easy to come by. They had all the standard features like IPMI ("Lights Out"), redundant power supplies, etc.

RMAing broken hard disks to Sun was an exercise in frustration and delays. It literally took weeks to get a hard disk replaced under warranty.

To some extent virtualization has done away with even this. Frankly, I doubt I will ever run a server that isn't a guest, unless I'm looking at something like a dedicated backup server (which I have right now) or some very high-capacity database server (for my business's needs, I can't see that happening any time in the near future). So for most of my needs, I'd be buying something with good RAID, fast drives, and lots of RAM and CPU that I can install VMware or Debian with KVM or Xen support on (running KVM right n

How many layers deep does that go? Sure, you can flop over to a hot spare, but getting parts in 4 hours or NBD is still valuable compared to ordering them and waiting 3 days. Lots can happen in those 3 days.

...looks a lot like the one from 2008. Big three = hardware warranty and support: a drive dies, the Dell guy's there in less than 4 hours. That covers the entire lifecycle of the server (3-5 years) while it's in production and playing a mission-critical role.
Virtualization/consolidation/cloud are whittling away at the server market, but it's never going to go away. Right now I'm dealing with an EC2 instance that won't start and I can't detach the volume to try to snapshot it or mount it to another new instanc

Or we think that our time costs, but it costs less than business downtime does. If you depend on the vendor and their support contract, you're impacted for however long it takes them to come out. They won't typically let you keep spares, so when a part breaks, that box is impaired or offline for whatever your contract response time is, and there's nothing you can do about it. But if it's a white-box server that can be worked on in-house, you can typically keep spares on the shelf. It may cost more in admin/t


Back in the day (say, 2008 as in the article), if you wanted to buy a server, you'd buy one from the big three.

If you wanted a piece of shit (and let's be fair; there are plenty of times that a piece of shit is exactly what a situation requires), then yes; a server from the big three was the way to go. If, however, you wanted something "better" than that (the quotes are due to the admittedly subjective use of the word), you ordered a Supermicro or Intel serverboard, server case, high quality power supplies, etc, etc... and you never looked back (not if you belonged anywhere near a server, anyway!).

So what should you get for your first server? I.e., you're a small company. You've got a couple of laptops. You're outgrowing mutual Samba.

You maybe want a fileserver. Maybe it'll have a few NICs, and a virtual machine on it (Xen?) will do double duty as an external webserver.

Erm, if you're a small sub-10-man outfit (say engineering, for example) and need storage in this day and age, you just buy a $300-400 QNAP NAS and four $100 2TB disks. You've got to be pretty out of it to deploy a file server over a NAS box.

This can be expanded with a cheap server running SBS or Linux. Businesses this small have been using non-brand-name Intel Xeon white boxen for over a decade; this is nothing new. Because a QNAP supports iSCSI and LDAP, you don't need excessive storage in a server to have Windows/AD.

Hell, for most small companies, two single drive NAS units that have automated failover and synchronization are all you need. Throw in external monitoring and plug-and-play backup redundancy for off-site and you are golden.

The MyBookLive units work pretty well in this respect, but I haven't bothered to do automated failover. We just use them for off-site backups with an rsync script that runs on the server.

Add in a nicer router like a Cisco ASA 5505, and you are fine until you need an accounting server...

Well, assuming you're just doing file stuff, one of the commonly available NAS solutions with a box full of disks and multiple file protocols would work great. If you're tiny, your external webserver will be at dreamhost or something (I might have said GoDaddy here in 2008), because you're not going to have a real network connection. More likely your network will be on par with your server equipment and it'll be a cable modem or DSL. Personally, and this has been my business niche a LONG time, so I hate

One does not pay the premium for hardware from the Big Three because it's a bargain.

Of course you do, if it's a rational decision. You buy it because the expected combined cost of the hardware + support + expected downtime and other losses despite the support is lower than with the available alternatives. It's bargain hunting, just with a wider scope of costs included in the analysis than just the sticker price of the hardware.

Why bother with branded parts made by an ODM when you can buy directly from the ODM?

My old workplace had (has, probably) a fairly beefy Sun server with a whole bunch of disks. They used it as a RAID-based storage server for a bunch of lab data. As they do on occasion, a hard disk would crap out. The server wouldn't take ordinary disks, though: it would only accept Western Digital disks with some Sun ID code baked into the firmware. Rather than simply being able to buy a few WD RAID-friendly disks ahead of time, we had to jump through Sun's hoops to get disks replaced under warranty. This usually was a multi-week process, during which the array with the failed disk was running on a hot spare -- hardly ideal. That was the last time we bought Sun systems.

At some other point, we were planning on setting up a few more storage servers for backup data. Dell's price for a storage system, including firmware-locked drives, was about triple the cost of doing it ourselves with SuperMicro servers, MD-based software RAID, and RAID-friendly disks. We ended up buying two of the SuperMicro-based systems and putting them in different buildings for semi-offsite backup (the concern was if the server room caught fire, not if a meteor affected the whole city). The only extra step during the setup was putting the disks in their caddies: the Dell systems came with the disks pre-installed. That took about 5 minutes per server. Whoop-dee-doo.

The Dell servers restricted our options (with their firmware-locked disks) and cost substantially more than doing it in-house. We'd be stupid to go with their products, as we'd be locked to that vendor for the life of the servers.

Sure, we had Dell Optiplex systems as the desktop workstations for researchers as they were inexpensive, reliable in the lab, and essentially identical (useful for restoring system images from one computer to another), but their server stuff is stupidly overpriced.

The SuperMicro servers were much more "open" in that they used pretty bog-standard parts and didn't have stupid anti-features like firmware locking.

First, a RAID array does not "[run] with a hotspare." When a failure occurs, the hotspare becomes a fully integrated member of the array, at which point you would be running without a hotspare, which on a redundant array isn't that much of a problem considering the Dell replacement would be there within 4 hours of reporting/determining a hardware failure.

It took Sun 3+ weeks to send us a replacement hard disk under warranty and required multiple phone calls. This happened on multiple occasions and was one of the main reasons we decided to stop buying Sun servers.

Yes, the spare became an integrated member of the array. That's true. My point was that the hot spare was now a member of the array and we had no remaining spare disks in the array. Since the server hardware only allowed drives with the Sun firmware, we couldn't keep a supply of spare disks around to swap into the arrays as needed.

Second, Dell servers do not have "firmware-locked disks." I've never heard of such a thing. It's a pretty absurd concept that you could only have OEM hard disks in your box, and an unrealistic expectation that clients would comply.

They did [dell.com]: "In the case of Dell's PERC RAID controllers, we began informing customers when a non-Dell drive was detected with the introduction of PERC5 RAID controllers in early 2006. With the introduction of the PERC H700/H800 controllers, we began enabling only the use of Dell qualified drives."

Software RAID was perfectly adequate for our needs: as backup servers they didn't need to have the utmost performance. As a bonus, we weren't reliant on a specific make and model of hardware RAID card: we could connect the array to any system running MD. Even under heavy load the demand on the CPU was negligible.
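The "connect the array to any system running MD" claim works because md writes its metadata to the member disks themselves, so a moved array can be found and started by UUID on any Linux box. A sketch of what that looks like after physically moving the drives (device names are assumptions):

```shell
# Scan all block devices for md superblocks and start every array found.
mdadm --assemble --scan

# Confirm the array came up clean on the new host.
mdadm --detail /dev/md0

# Persist the discovered arrays so they assemble on the next boot.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```

No matching controller generation, no firmware version lottery: the metadata travels with the disks.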

The Sun server was the main Samba share for the lab: lab instruments would write data to it and researchers would access that data on their desktops. It also used software RAID with multiple arrays set up. CPU usage was similarly low, even at high loads, and it worked quite satisfactorily for the lab.

You might have saved money up front, but over the life of the server, you could potentially lose much more when you consider catastrophic hardware failure which would be fully covered under the warranty of the Dell box.

SuperMicro offered a comparable warranty, so that wasn't really an issue.

At the beginning of August I got a quote from Dell for 2 R710 servers and 4 R610 servers. Three weeks later I placed the order. The response? Sorry, we're not selling those any more. You have to buy the R720s instead, and they're more expensive.

So, sorry, Dell. I won't be considering you for the upgrades to the other 200 servers I manage after all. A pity, because HP just pissed me off with the DL380p Gen8, which can hold 16 drives but has no RAID card that can use more than 8.

Nope. It handles 25 drives in the DL380e, the underpowered economy version of the DL380. In the DL380p it handles only 8 drives. If connected to a drive expander to multiplex the 8 SAS channels, it refuses to boot and prints a message saying to disconnect the expander.

The P822 can handle 200 hard drives, as long as 192 of them are in external chassis. It can handle exactly 8 drives inside the DL380p. It is not compatible with HP's internal SAS expander card the way the previous generation of Smart Array controllers was.

Also, the 25-drive version is not a DL380p Gen8, it's a DL380e Gen8. The E, or Economy, series is a distinctly lower-end box with a maximum processor speed of 2.4 GHz.

But if you've implemented a DL380p (not e) with the expansion drive chassis to bring it to 16 drives, and gotten all 16 to work on a single controller, then by all means tell me what configuration you used. If you've found the hidden magic, I'll be only too happy to eat my words.

Does not work. The current smart array controller (the old ones aren't compatible with the chassis) reports: "This smart array is not compatible with the expander. Please remove it and reboot." Bootup then stops.

While yes, right now, the tide may be against the server manufacturers -- the cloud still requires them in large quantities to host those services. If it negatively impacts sales, it's only to the extent that efficiency is improved. (E.g., Joe Businessman, who once bought a server for his office of 10 employees, skips it in favor of cloud computing solutions. But it turns out his needs are small enough that he can share the load with 1-2 other small businesses like his, all on a single server in the cloud.)

In my opinion, Dell has the right idea -- rethinking who the customer is for its server products. Beyond that, what's really news here?

Going out on a bit more of a limb, though? I'm really of the opinion that cloud services are over-hyped as the "in" thing for every business. Once companies migrate heavily to cloud-hosted solutions and use them for a while, a fair number will conclude they're not really beneficial. Then you'll see a return to the business model of running in-house servers. (Granted, those servers might be smaller, with lower power consumption than in the past. Little "microservers" can handle much of the basic file and print sharing work that companies used to relegate to full-size rack-mounted systems.)

But my own experience with cloud migrations tells me it's not so great, 9 times out of 10. For example, my boss has been using the Neat document management software for a while now to scan in all of his personal receipts and documents at home. Neat now offers "NeatCloud" so you can upload your whole database and then access your docs via an iPhone or iPad client, or even scan something new in by simply taking a picture of it. Sounds great, but in reality he's had nothing but problems with it.

The initial upload tied up his PC for the better part of a weekend, only to report that some documents couldn't be converted or uploaded properly. Close to 100 random pages of existing documents were thrown into a new folder the software generated to hold the problem ones. The only "fix" was to open a trouble ticket for EACH individual document that failed, so someone at Neat could examine it manually and correct whatever prevented their system from properly OCRing and uploading it. Clearly, that wasn't much of a solution! He tried, repeatedly, to get someone to remote-control into his PC and do some sort of batch repair -- but after a couple of promises to call back "the next day," nobody ever did. Now all Neat can tell him is that another update patch is coming out in the next week, and to disable cloud uploads until then.

Or take the recent migration a small office did from GoDaddy POP3/SMTP email with Outlook to Google-hosted mail. I usually help these guys with their computer issues, but they thought they could tackle this migration on their own. They wound up with a big mess of missing sub-folders of mail in Outlook on the owner's machine. After a lot of poking around, I discovered that part of the problem was characters in the folder names that Google Apps didn't consider valid. When the migration hit one of those folders, it just skipped the whole upload with an error. (Did Google's migration wizard even warn about this in advance, or offer to rename the problem folders before continuing? Heck no!)
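A pre-flight check would have caught the folder-name problem before anything was uploaded. A minimal Python sketch; the disallowed-character set here is a hypothetical stand-in, since the real rules depend on the target provider (check Google's migration documentation for the actual list):

```python
# Assumed, for illustration only: characters the migration target rejects.
DISALLOWED = set("^/\\")

def check_folders(names):
    """Return (ok, problems), where problems maps bad folder names
    to the offending characters found in them."""
    problems = {}
    for name in names:
        bad = sorted(set(name) & DISALLOWED)
        if bad:
            problems[name] = bad
    return (not problems, problems)

ok, problems = check_folders(["Inbox", "Clients/2011", "Archive^old"])
print(ok)        # False
print(problems)  # {'Clients/2011': ['/'], 'Archive^old': ['^']}
```

Run against an exported folder list, it gives the user a rename checklist up front instead of silent skips halfway through the migration.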

For that matter, take what you'd think is pretty basic functionality: cloud-based data backup. I've run into multiple situations now where people used services like MozyPro for their backups, only to discover that a full restore (when a drive crashed) was incredibly slow and kept aborting in the middle of the process, making the restore essentially impossible. Mozy's solution? They're willing to burn a copy of the data onto optical disc and physically mail it to you. So much for the whole cloud thing, huh?

While yes, right now, the tide may be against the server manufacturers -- the cloud still requires them in large quantities to host those services.

Google's position on the list of Intel server-chip buyers makes it clear that the problem isn't for server manufacturers (which Google very much is), it's for server vendors. Sure, the cloud requires servers. But if the people selling cloud services are also building their own servers, that doesn't create a market for server vendors.

It may also depend on what kind of servers companies like Google want. Dell, HP and the like produce expensive servers with high-cost maintenance contracts, which look great to conventional business-executive types. Google, OTOH, is probably taking the techie approach: generic white-box servers with no support. They're installing their own OS image, and it's not going to be Windows or a commercial Unix; with all of Google's custom software, they probably find vendor support all but useless. Ditto ha

"The Cloud" is only good as a secondary backup if you don't care that it becomes public.

Encrypt it all you want. Access to your data is the hardest hurdle, and by using the cloud you give it away.

I'm thinking that people who want to "be in the cloud" don't think about stuff like encrypting. "What, me--worry? I'm using the cloud!" En/Decrypting is work, and the whole idea of the cloud is to avoid work. If any crypto is being done, it's probably a service operated by your friendly (non-local) cloud provider, which means it provides no real security at all.

This willingness of businesses to surrender their family jewels—their data—to complete strangers has puzzled me since this type of serv

The benefit of the "cloud" is reduced cost, and using it certainly doesn't mean it's insecure.

Tarsnap (a backup service), for example, is very much a cloud service (runs on EC2 and stores the user data on S3), yet it encrypts each archive you upload with a random AES256 key that is then itself encrypted with an RSA key that never leaves your machine, and the whole thing has multiple levels of signatures (to prevent tampering).

It's also designed and run by the FreeBSD Security Officer, which isn't a position given e
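The per-archive key scheme described above is the classic "envelope" pattern: a fresh random key per archive, itself encrypted under a long-term key that never leaves your machine. This toy Python sketch shows only the structure -- it substitutes an HMAC-SHA256 counter-mode keystream for the real AES-256 and RSA primitives, so it is not secure crypto, just an illustration of how a wrapped key round-trips:

```python
import hashlib
import hmac
import os

def keystream_xor(key, nonce, data):
    """XOR data with an HMAC-SHA256 counter-mode keystream.
    Illustrative stand-in for a real cipher -- do not use for actual secrets."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hmac.new(key, nonce + i.to_bytes(8, "big"), hashlib.sha256).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

master_key = os.urandom(32)   # long-term key: stays on the client, never uploaded

# Encrypt one archive with its own fresh key, then wrap that key.
archive_key = os.urandom(32)
nonce = os.urandom(16)
ciphertext = keystream_xor(archive_key, nonce, b"backup contents")
wrapped_key = keystream_xor(master_key, nonce, archive_key)  # safe to store remotely

# Restore: unwrap the archive key with the master key, then decrypt.
recovered_key = keystream_xor(master_key, nonce, wrapped_key)
plaintext = keystream_xor(recovered_key, nonce, ciphertext)
print(plaintext)  # b'backup contents'
```

The point of the structure is that the storage provider only ever sees ciphertext and wrapped keys; without the master key on the client, none of it is recoverable.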

Some lad was trolling once and said "BSD is dying, Netcraft confirms it," citing the latest publication by Netcraft. Then it kind of took on a life of its own, and now nothing's dying until Netcraft confirms it.

And everything to do with VMware. No one is buying servers because they have no need to. When I can replace 400 physical boxes with a couple dozen ESX hosts, why wouldn't I?

I guess another way to look at it is that Intel has innovated itself out of a market. Multi-core processors enabled the virtualization boom, but Intel didn't charge enough for them. At least the auto industry was smart about it: new cars last twice as long as cars from 15-20 years ago, and prices have gone up accordingly.
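For what it's worth, the 400-boxes-to-a-couple-dozen-hosts consolidation claim above is easy to sanity-check with back-of-envelope numbers. The utilization and headroom figures below are assumptions for illustration, not measurements from any real environment:

```python
import math

physical_boxes = 400
avg_utilization = 0.05    # assume each old box idles around 5% busy
target_headroom = 0.70    # don't pack a virtualization host past 70% capacity

# Total work, measured in "fully busy box" equivalents.
demand = physical_boxes * avg_utilization          # 20.0

# Hosts needed if each host is one box-equivalent of capacity, capped at 70%.
hosts_needed = math.ceil(demand / target_headroom)
print(hosts_needed)  # 29 -- roughly the "couple dozen" range claimed above
```

Even doubling the assumed utilization only pushes the answer toward 60 hosts, which is still a ~7:1 reduction in physical machines -- hence the dent in server sales.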