With this setup you get 16 cores, 72GB of RAM, one 60GB SSD for the host OS, and three 250GB SSDs for VMs.

I set this up with Server 2012 on the 60GB SSD and then created a storage pool across the other three SSDs. That netted me around 700MB/s reads and mid-600MB/s writes, and it allows for both a Hyper-V setup and a VMware setup on the same computer. I also tested it with a Kill A Watt: idle consumption averaged around 90 watts and I never saw it peak above 120 watts; 90% of the time it sat in the 90-watt range. Noise-wise, this server is slightly louder than a desktop with fans all over, and quieter than a high-end video card blowing during a good gaming session.
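If you want to sanity-check throughput numbers like those yourself, here is a rough Python sketch of the kind of sequential test I mean (the file path and sizes are placeholders; make the test size comfortably bigger than your RAM, or the OS cache will inflate the read figure):

    # Rough sequential throughput check (Python 3). Path and sizes are placeholders.
    import os, time

    TEST_FILE = r"D:\bench.tmp"      # somewhere on the storage pool
    BLOCK = 1024 * 1024              # 1 MiB per write/read
    TEST_SIZE = 8 * 1024**3          # 8 GiB total; should exceed installed RAM

    buf = os.urandom(BLOCK)

    start = time.perf_counter()
    with open(TEST_FILE, "wb", buffering=0) as f:
        for _ in range(TEST_SIZE // BLOCK):
            f.write(buf)
        os.fsync(f.fileno())         # make sure the data actually hit the disks
    write_mbs = TEST_SIZE / (time.perf_counter() - start) / 1024**2

    start = time.perf_counter()
    with open(TEST_FILE, "rb", buffering=0) as f:
        while f.read(BLOCK):
            pass
    read_mbs = TEST_SIZE / (time.perf_counter() - start) / 1024**2

    os.remove(TEST_FILE)
    print(f"write: {write_mbs:.0f} MB/s, read: {read_mbs:.0f} MB/s")

It's crude next to a proper tool like Iometer, but it's enough to confirm a pool is in the right ballpark.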

I am very happy with this setup and would recommend this to anyone. Plus this server is on the HCL for ESXi.

I've been running the SH67H3 with an i7-2600 processor, 16GB RAM, 2 SSDs, and 1 SATA drive for a year and a half now and it has been a solid, quiet lady. Love that machine. I keep flipping it between vSphere and Hyper-V as needed. Never complains about anything. Highly recommended.

I built two whiteboxes, each with an AMD 8-core CPU and 32GB of RAM... They run pretty well; the only thing they don't support is FT for the moment, but it's enough to get through the VCP and even the VCAP. I also use a Synology DS412+ for storage, along with SSDs, and an HP V1910 gigabit switch.

I always had full-blown servers, which were doing my head in. You could usually hear them buzzing in my garage from down the street. I looked for a solution that gives me multiple servers without paying a fortune for electricity.

First of all, I know nested setups work, but I don't like 'em.

Over the last year I "collected" a few MicroServers. I love these little things, so I used them as the sole hardware for my lab.

Here's the lab from "left to right" (currently vSphere until I get my DCA / IAAS, then I'll change to Citrix):

Got an N36L as well - that one is (probably) dead, but if I get it fixed it'll become a dedicated storage server.

As for performance: good enough. Memory is always the bottleneck - the CPU never really is... The standalone box usually sits at 50% when running updates of some sort, but apart from that it's a lot lower - the maximum I've seen is 90% when I'm "doing stuff".

As for memory though - those four essential VMs total 14GB, but the Veeam and web servers are only on when needed, and I'm sure I could even lower the SQL server's RAM (currently 6GB).

But it's quite high maintenance and has actually been running out of oomph lately... I just got my old Supermicro chassis out of the garage, so I'll probably build a new single / nested lab... We'll see.

I did just fine with a Lenovo laptop with a quad-core Core i7, 16GB RAM, and a single 512GB SSD running VMware Workstation. If I hadn't been running an iSCSI/NFS storage server for my shared datastores on the same laptop, and hadn't needed the laptop for actual work, 256GB would have been plenty.

I have an HP DL380 G5 workhorse with 32GB RAM, dual quad-core Xeons, and about 1.5TB of SAS storage. I have been running a nested ESXi lab on it for about a year and a half now. I only use one power supply. Works like a treat - no performance problems ever.

I'm running nested inside VMware Workstation. It's been a little flaky the last couple of days, but I think that's the whole machine, due to an updated AV client.

Anyway, my system:

Gigabyte Z68XP-UD4
i7-2700K
16GB G.Skill Ripjaws
I've got quite a number of drives in my machine, but my VMs sit on 2 x 128GB SSDs in RAID 0 on an LSI RAID card.
I'm running 9 drives (4 of them SSDs); however, I had to put a 550W PSU in when my 650W died, which may be at the limit of providing enough power.

I currently have a spare box set up with 2 x 160GB drives in RAID 0 and 2 x 120GB IDE drives in RAID 0 for use as a SAN. I was running FreeNAS, but it doesn't do iSCSI very well, so I'm looking for a new way of running it.

Yes, that's what I used for both VCAPs. It does the job quite nicely; there were a couple of niggles here and there. For example, I couldn't turn on FT, couldn't play with EVC (no biggie, this one), and adding extra NICs to nested ESXi hosts sometimes required two reboots. I believe this wasn't a problem with the physical machine itself but an issue with nesting ESXi - you may run into these with any nested setup.
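One gotcha worth flagging for anyone building the same thing: nested ESXi guests need hardware virtualization exposed to them before they can run 64-bit VMs. On ESXi 5.1+ and Workstation 9+ that's the per-VM vhv.enable flag (ESXi 5.0 used a host-wide vhv.allow in /etc/vmware/config instead). A minimal sketch of scripting it, with a placeholder path:

    # Minimal sketch: ensure a VM's .vmx exposes hardware virtualization so a
    # nested ESXi guest can run 64-bit VMs. The path below is a placeholder.
    VMX = "/vmfs/volumes/datastore1/nested-esxi1/nested-esxi1.vmx"

    with open(VMX) as f:
        lines = [l for l in f if not l.startswith("vhv.")]  # drop any old vhv.* entries

    lines.append('vhv.enable = "TRUE"\n')  # per-VM flag on ESXi 5.1+ / Workstation 9+

    with open(VMX, "w") as f:
        f.writelines(lines)

Edit the file with the VM powered off, obviously.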

I reckon one of the best things about a nested setup is the ability to create as many ESXi hosts (including NICs, datastores, and VMs) as your hardware will permit and lab it up to your heart's content. Shared storage works without issues too - just go with the software iSCSI adapter. I had four nested ESXi hosts (with ESXi running on the physical machine too), a few VMs, and vApps. I set up SRM too, using an EMC storage appliance - worked without issue. Played with multiple vCenters, Linked Mode, set up multiple clusters, moved hosts in and out - the whole works, really. Never missed a beat.
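The software iSCSI part can be scripted too if you rebuild the lab often. A rough pyVmomi sketch, assuming obviously-placeholder credentials and addresses; it just enables the software iSCSI HBA, adds a dynamic-discovery (send) target, and rescans:

    # Rough pyVmomi sketch: enable the software iSCSI adapter on a host and
    # point it at a target. Host names, credentials, and IPs are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab only - skips cert validation
    si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        host = content.searchIndex.FindByDnsName(dnsName="esxi01.lab.local",
                                                 vmSearch=False)
        ss = host.configManager.storageSystem
        ss.UpdateSoftwareInternetScsiEnabled(True)    # turn on the software iSCSI HBA

        # Find the software iSCSI HBA and add a send target for dynamic discovery.
        for hba in ss.storageDeviceInfo.hostBusAdapter:
            if isinstance(hba, vim.host.InternetScsiHba):
                target = vim.host.InternetScsiHba.SendTarget(address="192.168.1.50",
                                                             port=3260)
                ss.AddInternetScsiSendTargets(iScsiHbaDevice=hba.device,
                                              targets=[target])
        ss.RescanAllHba()                             # pick up the new LUNs
    finally:
        Disconnect(si)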

I was working at an MSP, and one big customer used four Fusion-io drives in RAID 0 in their SQL boxes to get the required I/O. Naturally they had multiple boxes for redundancy... Each quad-socket Nehalem server (2U Supermicro) was worth $120k because of that (they had 8)...
