Feedback on Servers

I am looking for blade servers for virtualization, and SAN storage to start
with.

I need your feedback on which are ideal to have.

1) Which brand (HP/Dell/IBM)?
2) Which config: half height or full height?
3) Boot from SAN storage, or have SSDs or normal HDDs on board?
4) Any other requirements to take care of, like A/C and power?

Dear Ninov,
Regarding SSD:
SSD benchmarks are off the charts compared to conventional SATA/SAS drives.
SSD is a fairly new product in the market and still has a long way to go.
Most server boards support SAS by default, especially blade servers. SSDs
with a USB interface have been available for some time, but Toshiba is one
vendor that has built SSDs for the SAS interface. Now even servers come with
an SSD option (HP: BL260c, BL280c / IBM: HS12, HS22 / Dell: M905, M805).
There are lots of details we take into account and lots we don't, until it
all gathers up into a huge mess.
A. SLA and legal terms and conditions:
1. Acquisition cost (equipment, shipping, vendor's percentage, support,
deployment, and operational testing).
2. Vendor support (warranties and report/response times).
Blade Servers:
Note: most of the figures below depend on the type of board being used.
Half Height:
1. 2 storage slots (SAS/SATA/SSD).
2. 1 or 2 I/O expansion slots or mezzanine expansion slots.*
*(Most commonly used for a Battery-Backed Write Cache (BBWC) storage
controller and a fibre-optic I/O or SFP (Small Form-factor Pluggable)
controller.)
3. Network interface: various options are available; most boards now give
2 x 1GbE multifunction ports by default. (10GbE controllers built into the
main board are also available now; it depends on whether you need to save
cost and space for future expansion.)
4. Memory stays almost the same in both half- and full-height blade servers.
It depends on the board, but both now support around 128GB.
5. Blade server power management is mainly done by the blade system chassis,
which carries heavy-duty power supplies. When the system starts, all power
supplies start up with it, but the ones not being utilized are gradually
turned off. Not all the blade power supplies are on all the time; they work
as redundant options in case another one fails.
6. RAID controller: depends on what type of storage you have and how you
want to implement RAID.
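The power management described in point 5 boils down to a simple rule: keep enough supplies powered to carry the current load, plus spares for redundancy. Here is a minimal Python sketch of that rule; the wattage figures and the N+1 policy are my own assumptions for illustration, not how any particular chassis firmware actually decides.

```python
import math

def active_psus(total_load_w: float, psu_capacity_w: float,
                redundancy: int = 1) -> int:
    """How many supplies to keep powered: enough to carry the load,
    plus `redundancy` spares that can take over on a failure.
    Illustrative only; real chassis firmware manages this itself."""
    needed = math.ceil(total_load_w / psu_capacity_w)
    return needed + redundancy

# e.g. 8 half-height blades drawing ~350 W each, on 2250 W supplies:
# 2800 W needs 2 supplies, plus 1 redundant spare -> 3 active
print(active_psus(8 * 350, 2250))  # 3
```

With the remaining supplies switched off, each active one runs closer to its efficient load point, which is the cost-saving behaviour the chassis is after.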
Full Height:
1. 4 storage slots (SAS/SATA/SSD).
2. 3 or 4 I/O expansion slots or mezzanine expansion slots.*
*(Most commonly used for a Battery-Backed Write Cache (BBWC) storage
controller and a fibre-optic I/O or SFP (Small Form-factor Pluggable)
controller.)
3. Network interface: various options are available; most boards now give
2 x 1GbE multifunction ports by default. (10GbE controllers built into the
main board are also available now; it depends on whether you need to save
cost and space for future expansion.)
4. Memory stays almost the same in both half- and full-height blade servers.
It depends on the board, but both now support around 128GB.
5. Blade server power management is mainly done by the blade system chassis,
which carries very heavy-duty power supplies. When the system starts, all
power supplies start up with it, but the ones not being utilized are
gradually turned off. Not all the blade power supplies are on all the time;
they work as redundant options in case another one fails.
6. RAID controller: depends on what type of storage you have and how you
want to implement RAID.
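The RAID point interacts directly with the slot counts above: 2 slots on a half-height blade limit you to RAID 0/1, while 4 slots on a full-height blade open up RAID 5 and RAID 10. A small Python sketch of the usable-capacity arithmetic (drive sizes are assumed figures for illustration):

```python
def usable_gb(drives: int, drive_gb: int, level: str) -> int:
    """Rough usable capacity for common RAID levels. Illustrative only;
    real controllers reserve some metadata space on top of this."""
    if level == "raid0":            # striping, no redundancy
        return drives * drive_gb
    if level == "raid1":            # mirrored pair
        if drives != 2:
            raise ValueError("RAID 1 here means exactly 2 drives")
        return drive_gb
    if level == "raid5":            # one drive's worth of parity
        if drives < 3:
            raise ValueError("RAID 5 needs at least 3 drives")
        return (drives - 1) * drive_gb
    if level == "raid10":           # striped mirrors
        if drives < 4 or drives % 2:
            raise ValueError("RAID 10 needs an even count >= 4")
        return drives // 2 * drive_gb
    raise ValueError(f"unknown level: {level}")

# Half-height (2 slots) vs full-height (4 slots), 300 GB drives:
print(usable_gb(2, 300, "raid1"))   # 300
print(usable_gb(4, 300, "raid5"))   # 900
print(usable_gb(4, 300, "raid10"))  # 600
```

So if local redundancy beyond a simple mirror matters, the extra slots are themselves a reason to go full height.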
Booting the server:
It is always recommended to boot the server from its primary drive, which is
your internal SSD/HDD. Other factors (RAID, clustering, backup and restore)
are also involved. An OS contains lots of details about the server, its
drivers, and the HCL (Hardware Compatibility List).
(If you are sure how many servers you will need and how much extra storage
in the chassis, then go with what's best available; otherwise get the
chassis option that provides the maximum number of servers.) There is a
valid reason: it saves you the cost of buying extra racks to fill in the
space, which leads to saving the extra wiring required, and at times even
power, depending on the specs. If planned well, it also provides a
concentrated cooling environment and cost savings on electricity bills.
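The rack-space argument above is easy to put in numbers. This Python sketch compares the rack units a chassis full of blades occupies against the same server count as standalone machines; the 16-slot/10U chassis and 2U rack-server figures are assumptions picked for illustration, so plug in your own vendor's specs.

```python
import math

def chassis_plan(blades_needed: int, slots_per_chassis: int,
                 chassis_u: int, rack_server_u: int = 2) -> dict:
    """Compare rack units used by blades in chassis vs the same number
    of standalone rack servers. U figures are assumptions."""
    chassis = math.ceil(blades_needed / slots_per_chassis)
    blade_u = chassis * chassis_u
    standalone_u = blades_needed * rack_server_u
    return {"chassis": chassis, "blade_u": blade_u,
            "standalone_u": standalone_u,
            "u_saved": standalone_u - blade_u}

# 14 half-height blades in a 16-slot, 10U chassis vs 14 x 2U servers:
print(chassis_plan(14, 16, 10))
# {'chassis': 1, 'blade_u': 10, 'standalone_u': 28, 'u_saved': 18}
```

Those saved rack units are where the savings on extra racks, cabling, and concentrated cooling come from.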
Checklist of things:
1. Anti-static floor.
2. Choke-less lights.
3. Wrist bands for people working inside the server room.
4. Disallow all petroleum by-products inside the server room: shoes,
sweaters, jackets, wool, nylon, leather, all out of the server room.
5. Precision cooling (concentrated airflow, hot aisle / cold aisle).
6. Humidity control (index: 32-38).
7. Fire/smoke systems.
8. Industrial sockets and breakers for the power backbone.
A few other details :D

It depends on your plans and existing environment. How many virtual servers do you plan on creating & how much space will they all need? What do you have for hardware now and are you going to virtualize any of it? IBM only has one height of blade, but they have some that are double-wide & multiple chassis to fit different needs. The Dell chassis & blades are based on the same design as HP, so they have half & full-height blades, but only a full-sized chassis. HP has 2 sizes of chassis as well as half/full-height blades, tape blades & storage blades. They all have a variety of 1, 2 & 4 socket blades depending on the horsepower you need.
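The sizing questions in the paragraph above (how many virtual servers, how much space they all need) can be answered with simple arithmetic before talking to any vendor. A minimal Python sketch, where the VM profiles and the 30% headroom for growth and snapshots are my own assumed figures:

```python
def vm_storage_gb(vm_profiles: list[tuple[int, int]],
                  growth: float = 0.3) -> int:
    """Total SAN space for a list of (count, gb_per_vm) profiles,
    plus proportional headroom for growth and snapshots.
    All numbers are planning assumptions, not vendor guidance."""
    base = sum(count * gb for count, gb in vm_profiles)
    return round(base * (1 + growth))

# e.g. 10 small (40 GB) VMs + 4 large (200 GB) VMs, 30% headroom:
print(vm_storage_gb([(10, 40), (4, 200)]))  # 1560
```

A figure like this also feeds the brand/chassis decision: it tells you how many blades' worth of consolidation you actually need and how large the SAN purchase has to be on day one.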

VMware and Citrix can use a USB stick in the blades to hold their OS, so you don't need any internal drives, and Microsoft might be able to do the same with Hyper-V. SSD and boot-from-SAN are fairly expensive to use just for a hypervisor, so internal SAS or SATA drives are a better option for the servers. Fibre Channel or iSCSI are your normal options for external storage of the VMs, with FC typically being higher cost and requiring some level of expertise, but generally higher performance. iSCSI is generally lower-cost and requires a bit of tweaking to get maximum performance, but vendors like LeftHand Networks and EqualLogic also bring a lot of extra benefits to a virtualized environment.

All blade solutions require less power & cooling than the equivalent individual rack servers, but they fit that into a smaller space, so the per-U power & cooling needs in the rack will actually be higher. This means most companies can't fill a rack with blades like they would regular servers. Blades also have the added benefit of fewer cables to string for network, storage & power.
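The point above is worth working through, because it sounds contradictory at first: blades draw less total power, yet need more power per rack unit. A quick Python check with assumed figures (a 10U chassis of 14 blades at 4.5 kW total, vs 14 standalone 2U servers at 400 W each) shows both halves of the claim at once:

```python
def power_per_u(total_watts: float, rack_units: int) -> float:
    """Watts drawn per rack unit: the density figure that drives
    facilities planning. Input wattages here are assumptions."""
    return total_watts / rack_units

blade_total, blade_u = 4500, 10          # one chassis of 14 blades
standalone_total, standalone_u = 14 * 400, 14 * 2   # 5600 W in 28U

print(blade_total < standalone_total)                # True: less total power
print(power_per_u(blade_total, blade_u))             # 450.0 W per U
print(power_per_u(standalone_total, standalone_u))   # 200.0 W per U
```

So the blades save 1.1 kW overall but concentrate their draw into a third of the space, which is exactly why a rack often can't be filled with them under typical per-rack power and cooling budgets.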