Jason Nash – December 5, 2010

vSphere Home Lab: Part 3 – Compute

We’ve covered the networking and storage components of my home vSphere lab build, and now it’s time to talk servers.  This is where things get tricky, and you really do need to pay attention and make smart choices or else you’ll spend more money than you should and/or not be able to take advantage of all the vSphere features you want.  This was the biggest surprise I had when building this lab.  I expected to grab some cheap, small i7-type servers and load them up with cheap RAM.  That didn’t really happen…

As I’ve done in the other sections, the first thing to do is outline my requirements:

  • Minimum of two physical servers
  • Single Quad-core CPU in each and hyperthreading preferred
  • Prefer 16GB of RAM
  • Must be able to support all current vSphere 4.1 features including VMware FT and VMDirectPath
  • Minimum of two NICs with option to go to four
  • No internal disk or RAID required
  • Prefer option to boot off internal USB or SD drive
  • Smaller is better (Mini/Micro ATX)
  • As energy efficient as possible
  • Very quiet

My requirements are pretty straightforward.  The problem is that vSphere does not have an extensive Hardware Compatibility List (HCL).  It’s not like Windows.  This really becomes a problem around storage and networking.  vSphere has no support for the software RAID that is so common on motherboards these days, so if you want to build a system and use internal RAID storage, expect to buy a supported hardware RAID card.  LSI boards are very common for this purpose.  Driver support for NICs can also be a problem, though there are ways to add drivers for NICs not supported in the standard installation.  It just depends on whether you want that hassle and whether you want to deal with it again every time an upgrade is released.  A good place to start is the community ESX/ESXi 4.0 Whitebox HCL.  Those lists are sometimes a little dated, so check the forums for more information.
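
If you already have a candidate board on hand (or can borrow one), a quick way to sanity-check it against the whitebox HCL is to boot a Linux live CD and dump the PCI vendor:device IDs of the network and storage controllers, then search for those IDs in the HCL and forums.  Below is a minimal sketch of that check; it assumes a Linux environment with lspci (pciutils) available, and the keyword filtering is only illustrative.

    #!/usr/bin/env python
    """Rough sketch: list network and storage controller PCI IDs so they can be
    checked against the whitebox HCL.  Run from a Linux live environment on the
    candidate hardware; assumes lspci (pciutils) is installed."""
    import subprocess

    # -nn prints device names plus numeric [vendor:device] IDs, which are easier
    # to match against HCL entries than marketing names.
    output = subprocess.check_output(["lspci", "-nn"]).decode("utf-8", "replace")

    KEYWORDS = ("Ethernet controller", "Network controller", "SATA controller",
                "RAID bus controller", "SCSI storage controller")

    for line in output.splitlines():
        if any(keyword in line for keyword in KEYWORDS):
            print(line)

The numeric IDs in brackets are what you want to match, since the marketing names aren’t always consistent between what a vendor prints on the box and what shows up in the HCL threads.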

The point is that you need to plan your build accordingly and really look at what chipsets a motherboard uses.  If you don’t care about the onboard NICs and storage controller your options open up.  You can always add PCIe NICs to the system, though this can often eat up any money you’d save by going with a cheaper board over one that’s completely supported.

An easy way to get around all of this is to buy a server that is on the vSphere HCL.  The difficulty is finding one that is cheap enough and not built to run in a data center (fan noise and power usage).  There are some good options out there.  HP has the ML110 line and Dell has the T110 line.  If one of those fits your needs and budget it’s probably the easiest solution.  The downside is that you’ll probably need to add one or more NICs, and if you want a faster CPU the price goes up by far more than the actual difference in CPU cost if you bought the chips yourself.  That’s the main reason I did not go with one of the prebuilt options.  To go from an X3430 CPU (no hyperthreading) to the X3440 (with hyperthreading) on the HP was like $100.  Add one or more NIC cards (the HP server has one NIC by default) and you’re in for another $100.  It adds up.

The interesting thing about my build is that I ended up very close to several other whitebox builds out there.  I think it shows that great minds think alike and, more likely, that the vSphere HCL limits your options.  Phil’s Baby Dragon build (nice local storage) and Chris’ configuration are both worth a look.  Here is the breakdown.  Each of my two servers has:

  • Supermicro X8SIL-F motherboard (two Intel 82574L NICs plus a dedicated IPMI 2.0 NIC)
  • Intel Xeon X3450 CPU with the stock Intel cooler
  • 8GB (2 x 4GB) of Kingston KVR1333D3E9S/4G DDR3 ECC RAM
  • Lian Li V351B MicroATX case
  • Rosewill Green 430W power supply
  • 4GB USB thumb drive for the ESXi 4.1 installation

Let’s go over some detailed information…

Motherboard

Like I said before, the key to a good motherboard is the chipset support for NICs and storage as well as support for the hardware virtualization options, mainly VT-d.  The X8SIL-F uses the Intel 3420 chipset, which is a very solid chipset that supports everything I want.  The only downside is that it also requires ECC RAM.  There is no option to use non-ECC memory, which drives the price of memory up, but that was a compromise I was willing to make for everything else the board offers.  The two onboard NICs are Intel 82574L and are on the vSphere HCL.  This board also has a third, dedicated NIC for IPMI 2.0 management, which is very useful.  I want full vSphere support so Distributed Power Management (DPM) can use IPMI for server shutdown/wake.  The IPMI functionality also gives you full remote KVM access to the server so you can run it headless.  Installing ESXi is very simple.  Just remote in, mount the ISO via Virtual Storage, and go.  Here is a screenshot showing the remote display.

[Screenshot: IPMI remote KVM session showing the server’s console]

If you want to put a display on the server you can, as the X8SIL has onboard Matrox video.  The great thing about this board is that it just works.  No special oem.tgz file for drivers, no custom installs…it just goes.  Pop the disc/ISO in, tell it to install to the USB thumb drive, and you’re done.  Simple and fully supported.  The price isn’t bad either at around $189, especially when you take the two fully supported NICs into account.

One quick word of warning: if you decide to use a different board with a chipset that supports all the Intel VT features, make sure the BIOS also exposes them.  There are a lot of consumer/desktop boards out there that don’t expose those features in the BIOS, so you can’t enable them, and without that they are useless.  Also, upgrade the BIOS on the X8SIL-F to the latest version and then do the same for the IPMI firmware.  Make sure the NIC configuration in the IPMI interface is set to “Dedicated”.
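
Once the dedicated IPMI NIC is configured, the BMC is also scriptable from anywhere on the management network, which is handy for checking whether DPM really powered a host off or for waking it back up by hand.  Here is a minimal sketch using ipmitool; the BMC address is a placeholder and ADMIN/ADMIN is only the Supermicro factory default, so substitute your own (and change that password).

    #!/usr/bin/env python
    """Sketch: query and control a host's power state through its IPMI BMC.
    Assumes ipmitool is installed; the address and credentials are placeholders
    for your own dedicated IPMI NIC settings."""
    import subprocess

    BMC_HOST = "192.168.1.200"   # hypothetical address of the dedicated IPMI NIC
    BMC_USER = "ADMIN"           # Supermicro factory default -- change it
    BMC_PASS = "ADMIN"

    def ipmi(*args):
        cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
               "-U", BMC_USER, "-P", BMC_PASS] + list(args)
        return subprocess.check_output(cmd).decode("utf-8", "replace").strip()

    if __name__ == "__main__":
        print(ipmi("chassis", "power", "status"))   # e.g. "Chassis Power is on"
        # ipmi("chassis", "power", "on")            # wake a host that DPM shut down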

Finally, I want to put in a good word for Supermicro’s customer service.  I ordered all of my equipment from NewEgg, and while NewEgg has great prices (usually), their customer support is iffy.  In fact, I paid the rush fee on my second server order so it would go out that day via UPS 3-day.  Everything went out that day via 3-day UPS…except for the CPU.  It went out Ground from California.  I live in North Carolina…it took a while.  Anyway…  When I was finally building my second server I found that it would only recognize the first DIMM.  The second DIMM slot was bad.  Supermicro offers overnight cross-shipping as standard on their motherboards.  I just submitted a form with my information and the board’s serial number and that was it.  In two days I had a replacement board.  Very, very pleased with that level of service.

Processor

Choosing a CPU isn’t too bad.  Pick a price point and/or speed, then decide how many cores you want and whether you want hyperthreading.  If you choose a different board, be aware that some boards rely on the CPU’s integrated graphics, so without a CPU that provides it you’ll need to add a dedicated video card of some sort.  Also, some CPUs are listed as having VT-d support, which is required for VMDirectPath.  I hear conflicting things on this.  VT-d should be a chipset feature, but it appears that some CPUs don’t support it and therefore it won’t work.  Be careful there, as it gets even more confusing since some Intel i5s used to list VT-d support and now don’t.  Originally I was going to get the X3430 CPU but decided I wanted hyperthreading, so I moved to the X3440.  Then I saw it was only about a $20 difference up to the X3450, so I settled there.  The X3450 is faster and has more potential for “Turboing” up to higher frequencies.  It is a 95W TDP (Thermal Design Power) part.  For lower power consumption there are some L34xx CPUs, but they are either dual-core or clocked much lower.  There are also some 135W CPUs out there, but I thought the X3450 was the right compromise.
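
If you’re unsure whether a particular board/CPU combination actually exposes VT-x and VT-d, one sanity check before installing ESXi is to boot a Linux live CD on it: VT-x shows up as the vmx CPU flag, and an active VT-d/IOMMU usually leaves DMAR messages in the kernel log.  A rough sketch of that check follows; the exact dmesg strings vary by kernel, so treat the matching as a heuristic rather than proof.

    #!/usr/bin/env python
    """Sketch: check for VT-x and (roughly) VT-d from a Linux live environment
    before committing a box to ESXi.  The dmesg strings vary between kernels,
    so this is a heuristic, not proof."""
    import subprocess

    cpuinfo = open("/proc/cpuinfo").read()
    has_vmx = "vmx" in cpuinfo.split()
    print("VT-x (vmx flag): " + ("present" if has_vmx else "not found"))

    dmesg = subprocess.check_output(["dmesg"]).decode("utf-8", "replace")
    hits = [line for line in dmesg.splitlines() if "DMAR" in line or "IOMMU" in line]
    print("VT-d/IOMMU hints in dmesg: %d" % len(hits))
    for line in hits[:5]:
        print("  " + line)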

To cool the CPU I’m using the Intel OEM cooler.  It’s VERY quiet (silent really) and does a great job.  There is no need to go with a larger, more expensive cooler.

Memory

The choice of memory was really driven by the motherboard’s chipset.  The Intel 3420 chipset requires ECC RAM and can be a bit picky about the configuration, so check the information on Supermicro’s site.  Kingston’s compatibility results for the X8SIL-F motherboard list a lot of options.  I chose two of the KVR1333D3E9S/4G modules for each system.  They’re reasonably priced with a solid warranty.  Before buying this RAM I confirmed its compatibility with a few other people who run the same combination.  I went with 2x4GB to leave me room to grow later.  Originally I wanted 16GB of RAM in each system, but due to the extra cost of ECC (roughly 30% more) I am running 8GB each for a while.  You can do a surprising amount with just 8GB per system, so it hasn’t been an issue yet.

If you look at Supermicro’s memory guide they list Hynix memory, and all of my Kingston sticks use Hynix chips.  One odd thing: I ordered my server parts separately, meaning I ordered one server’s worth of parts to test and then the second set once I was satisfied.  The second set of memory modules looks different from the first set.  At first I was pretty concerned, but on closer examination the DRAM chips were exactly the same.  The only real difference was the size of the PCB they are mounted on, and they work great.

MicroATX Case

There are tons of options for cases out there.  Just pick one you like in the price range you want.  Make sure it fits the board size you’re using, and if you plan on internal storage confirm the number of drive bays.  I chose the Lian Li V351B due to the reputation and known quality of Lian Li’s cases.  The V351 has a removable motherboard tray, which makes installing the board easy.  The side panels come off easily, allowing you to do any cabling with the board in place.  It does not come with a power supply, so you need to add that.  If I did this again I’d probably choose a smaller case, if I could find one.  I’m not sure I could, since the only way to save space would be a case without any drive bays and those are usually HTPC cases.  I’m not sure an HTPC case would provide adequate airflow for the CPU.

There are a few notes to keep in mind when putting the X8SIL-F in this case.  If you read Phillip’s overview you’ll see he mentions replacing the fans in the case.  That’s an option but not required.  The fans in the V351 are 3-pin fans and the X8SIL wants 4-pin fans.  Luckily, the X8SIL has backward compatibility support for 3-pin fans on 3 of the fan connections.  I used one for the CPU cooler and two for the large 120mm fans in the front of the case.  The rear fan was just connected to a normal 5v connection on the power supply (adapters included with the case).

As mentioned in Chris’ writeup, the V351’s power LED uses a 3-pin connector and the X8SIL wants a 2-pin one.  That’s an easy fix…just move one of the pins over to the middle position on the connector and you get the nice pretty blue LED on the front of the case.  While not a big deal, it’s nice to be able to see at a glance whether a server was put in standby by DPM.

The only real downside to this case is the price.  You can get another case with a power supply included for the same money or less.  Keep in mind, though, that you don’t get to choose your power supply when you buy a combo like that.  I’ve built computers for years and understand the quality differences between cases.  Cheap cases aren’t built well.  The edges are sharp.  Panels don’t fit right.  They’re hard to work in and modify, and they’ll rattle if anything causes vibration.  For this reason I sucked it up and paid for good cases.

Power Supply

Power supplies are not exciting.  I just wanted something quiet, efficient, and as small in wattage as possible so it runs closer to its efficient load range.  The Rosewill Green series is very reasonably priced and is 80%+ efficient across its load range.  I’d like to go higher, into the 90s, but the price goes up dramatically.  The 430W unit is the smallest offering in Rosewill’s Green line, so that’s what I went with, and I’ve been pleased.  Seeing as these servers top out around 120W at full load, I have plenty of headroom.  The fan in the power supply is VERY quiet (pretty much silent) and the unit fits with no problem.  What else can I say, really?

USB Thumb Drive

For the ESXi 4.1 installation I used some random 4GB thumb drives I had sitting around.  The X8SIL-F has an internal USB port for just this purpose and it works great.  Just boot off the ESXi ISO and install to the USB drive.  Easy.  One thing I didn’t think about is that my 4GB drives were old and therefore really slow.  If I grab the VI Client off one of my two boxes it transfers at something like 200KB/s and takes 20 minutes, so I don’t do that anymore.  Other than that, the drive speed doesn’t really matter.
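
If you’re reusing an old thumb drive like I did, it may be worth timing it before you dedicate it to ESXi.  Below is a quick and dirty sketch that writes a test file to the mounted stick and reports a rough sequential write speed; the mount point is a placeholder, and the number is only a ballpark, not a real benchmark.

    #!/usr/bin/env python
    """Quick-and-dirty sequential write test for a USB stick before using it as
    an ESXi boot/install device.  MOUNT_POINT is a placeholder for wherever the
    stick is mounted on your workstation."""
    import os
    import time

    MOUNT_POINT = "/media/usbstick"          # hypothetical mount point
    TEST_FILE = os.path.join(MOUNT_POINT, "speedtest.tmp")
    CHUNK = b"\0" * (1024 * 1024)            # 1 MB of zeroes
    TOTAL_MB = 64

    start = time.time()
    with open(TEST_FILE, "wb") as f:
        for _ in range(TOTAL_MB):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())                 # make sure the data actually hit the drive
    elapsed = time.time() - start

    print("Wrote %d MB in %.1f s (~%.1f MB/s)" % (TOTAL_MB, elapsed, TOTAL_MB / elapsed))
    os.remove(TEST_FILE)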

Random Thoughts

The full lab has been up for a few weeks now and so far I’m very pleased with the outcome.  I blew my original budget but the end result is a very robust setup that supports all of the features that vSphere has to offer.  Here are some random notes…

Power usage on these servers is great.  I discussed power utilization and settings in an earlier post.  The servers idle at 38W and top out around 120W at full load.  They are very, very quiet.  The only time I’ve heard them at all is when I was stress testing them at 100%.  The CPU fan kicks up to full speed and emits a faint whine.  At normal load they are silent.  What’s funny is that when I kill my torture test (Prime95) the fans instantly spin back down and go silent.

Take some time and move your cabling out of the way.  When I built my first server I didn’t bother with that at all, and the loose cables blocked airflow through the case.  After 2 or 3 minutes of Prime95 torture testing the system would start beeping due to thermal alarms.  I moved my excess cabling up behind the power supply.  That isn’t a problem at all, since the power supply’s fan is on its bottom and pulls air up from the case and then out.  Perfect arrangement.  Here is an after picture showing the sloppy, yet effective, clean up.  I also moved some unused cables (front-panel audio and the like) to the front of the case.

[Photo: inside of the case after moving the excess cabling out of the airflow path]

Again, it’s not pretty, but there is now a clear path for air from the front of the case to the back.  Set up like this I can run the torture test for as long as I want without any problems.

And one more…  You can’t adjust the IPMI alarm thresholds on this system, or at least I see nowhere to do it.  I have a ticket open with Supermicro to ask if there is a way.  This causes one problem: at idle the cooling works so well that my CPU fan actually trips the low-RPM alarm threshold.  I have disabled this alarm in vCenter for now, but I’d like to reactivate it if possible to watch for a failing fan.  Another option would be to change the fan configuration in the motherboard BIOS.  I have mine set to Balanced.  I could probably raise that and increase the minimum fan RPM, but for now I’ll run alarmless.
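
For what it’s worth, ipmitool can at least show the thresholds the BMC is using, and on some BMCs it will accept new values; whether the Supermicro firmware allows the write is exactly what my ticket is about, so treat the threshold-setting line in the sketch below as an experiment.  The sensor name and RPM values are placeholders, as are the BMC address and credentials.

    #!/usr/bin/env python
    """Sketch: inspect (and experimentally adjust) fan sensor thresholds over IPMI.
    Whether the BMC accepts threshold writes varies by firmware; the sensor name,
    RPM values, address, and credentials below are placeholders."""
    import subprocess

    BMC_HOST, BMC_USER, BMC_PASS = "192.168.1.200", "ADMIN", "ADMIN"  # placeholders

    def ipmi(*args):
        cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
               "-U", BMC_USER, "-P", BMC_PASS] + list(args)
        return subprocess.check_output(cmd).decode("utf-8", "replace")

    # List every sensor with its current readings and thresholds; the fan rows
    # include the lower non-critical/critical limits that trigger the alarm.
    print(ipmi("sensor", "list"))

    # Experimental: lower the "lower non-recoverable / critical / non-critical"
    # limits for a hypothetical sensor named "FAN 1".  Some BMCs reject this.
    # print(ipmi("sensor", "thresh", "FAN 1", "lower", "200", "300", "400"))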

Conclusion

After everything is said and done I’m very pleased with these servers.  They are powerful and quiet.  I originally expected to build each server for under $500, but that’s not how it worked out; they ended up around $750 each.  You can save money in a few places.  A cheaper case is a big one.  A cheaper CPU (X3440 or X3430) also works, as do some i3 chips, though those don’t support Intel VT-d.  The RAM I’m using is about as cheap as you can go with any real confidence.  So there are ways to do this for probably $600 if you give up a few minor things.  This probably isn’t my final configuration.  I plan to add RAM to at least one system.  I may take one to 16GB and leave the other at 8GB.  I figure this will give me resources but also allow me to starve one server easily to do some testing.  Along with RAM I’ll probably also pick up one or two Intel SR-IOV capable NICs to play with…but for a good, fully functional server these are perfect.

21 thoughts on “vSphere Home Lab: Part 3 – Compute”

  1. Great post/series Jason! I’m looking at building out two servers for my lab and your posts as well as a few others have helped narrow down the hardware decisions.

    I’m not sure if you had this planned for a future post, or if this is a separate thread entirely, but the biggest issue I’m running into right now is licensing. Ideally I’d like to have ent plus/vCenter/n1kv in my lab, but I’m short on options. I’ve been running on the trial period, which seems to be the only real option, but it’s not much of a long-term solution, nor can I get the n1kv.

    We aren’t VMware partners at work, so I don’t have much insight, but I’m really hoping VMware/Cisco come up with an option somehow. I guess this also ties into the similar educational IOS issue which I would love to see Cisco address.

  2. Intense post man, this is awesome stuff. Looks like you put a lot of effort into this research. This is really cool. I could never have a server in my house in the past due to noise and power, looks like you have solved a lot of that. Great work man.

  3. I’ve been looking into using a Dell T110 for this purpose. It’s on the HCL, and can be outfitted nicely with a couple extra GbE nics fairly cheaply. Good CPU options, and fairly quiet, with minimal heat and power consumption for a home lab. Just need to build 2 of them and can start tinkering with a portable vMotion lab… on the cheap! (they start at $399!!!)

  4. Do you really need two servers for a lab? I am also building a vmware lab (whitebox).

    Reason 1)
    – research and test various guest OSs, software and configurations

    Reason 2)
    – learn vmware, ha, dr etc
    – although I use vmware at work, everything is in production and it is hard to test the advanced features

    I have seen details on various sites showing how to install esxi 4.1 on top of esxi 4.1 (http://www.vcritical.com/2009/05/vmware-esx-4-can-even-virtualize-itself/).

    I intend to build two guest esxi 4.1 servers, and then connect them to a guest running FreeNAS or OpenFiler. I know the performance will not be that great but the intention is to play with the advanced features of vmware.

    I would rather put a little more power into a single box than into two boxes. Does this make sense? Has anyone else done this type of setup?

    I am trying to reduce costs and save space and power by only using one box for the lab.

    1. It’s up to you on how many you want. I could have built a single larger box and run ESXi inside of itself. I do that on my notebook for some customer demos. The downside to that is there are some features you can’t use or don’t work as well. I just wanted a more “realistic” lab for when I do more videos or testing.

  5. I too have gone through 2 Supermicro X8SIL-F motherboards due to memory issues. The second one had 2 bad DIMM slots, both in slot 2 of channels 1 & 2. Newegg is replacing it. Hope the third one works.

  6. I ended up buying 2 Dell T-310 servers. They were around $1000 per machine, but they are whisper quiet. 375W PSU.
    Used a Dell customized ESXi 4.1 installer CD obtained from Dell’s website (for the onboard SATA controller, apparently)

    Threw in a Dell 24″ LED monitor for like $230.

    I’ve connected it all up to a Juniper EX2200 L2/L3 switch and a Juniper SRX-100 firewall. I’m using a Linksys NSS6000 with 3x250GB SATA drives in it (it had 4, but one was dropping out of the RAID set repeatedly, so I pulled it and did a RAID-5 without a spare)… I back up my images externally, so I’m OK with it… I’d MUCH rather have something along the lines of a NetApp FAS2020 for storage, but those are harder to get than hand-me-down Linksys NAS gear… 😉 At least it does NFS.

    So far, I have been able to test out DRS and HA, vMotion, and am running a small VMware View 4.5 deployment with some Thinapps deployed out to a mix of Win7 and XP vm. Even got a USB footpedal for transcription working with the Samsung NC240 running 3.3 f/w.

    Maybe not a “cheap” home lab, but it’s working well with no issues. Sometimes time is the most expensive part. That’s one reason I avoid building whiteboxes any more. Too much time for the money it’s making me… Let Dell do it cheaper!

    HTH

    wes

  7. Hi – interested in what you have done. Thanks for the work. I am pricing out what you have listed and I am unable to match your $750 cost. Can you list source for your parts? Thanks

  8. Why did you decide to not go with the Core i7 procs? Doesn’t the nehalem series support both FT and VMdirectpath?

    1. It was a combination of things. I really liked the X8SIL-F motherboard with IPMI and onboard Intel NICs so that drove a lot of it. A lot of desktop boards don’t have BIOS support for VT-d and I didn’t want to play that game. The i7 build is cheaper, for sure…especially since the X8SIL requires ECC RAM so I wouldn’t fault anyone for doing that.

  9. Nice post, thanks for that. Thinking of doing something similar, except that I want to go with the X8SIE-LN4F motherboard, ATX and 4 NICs. Got 2 spare ATX cases here ;-)
    The only important difference that I see is the Super I/O chip:
    Nuvoton W83627DHG UBE versus Winbond 83627DHG-P.
    Would my choice be a problem, do you think?

  10. Hey I saw you used (and linked to) my blog wolfplusplus for some info on your esxi setup. I’m glad some of my info was helpful, but my name is not Chris 🙂

    Also probably too late now, but I didn’t use ECC and it works just fine.

    I just revisited my setup and ended up repurposing it for an ESXi + Solaris all-in-one box. I’ll be doing a post on that soon, but here is the basic info I followed, if you’re interested.

    -Nelson

  11. Oh also you said the X3440 doesn’t support vt-d, but that is what I used for my setup and I am successfully doing pass through of a SAS card which requires vt-d.

  12. Hey Jason,

    Thanks for the guide… I am about to take a shot at this build with little VMWare implementation experience. I know I will learn a lot, your blog has been invaluable.

  13. Hi Jason,

    Thank you for your blog, you just opened up my head to building my first lab environment. Unfortunately my resources are limited since I am living outside America, so I might consider some alternative specs for my servers and NAS, but many thanks to you again, because I now have a reference for what to look for in an ideal environment. More power to you! Looking forward to reading more of your work. 🙂

  14. Hi.
    I’ve noticed that this build post, like similar ones on the net, is a few months old or even older. Given today’s updated hardware and vSphere 5.x requirements – can you suggest or post an updated listing of equipment? I have 2 hosts I manage at work but need/want to build a home lab to work/play with and increase my skill set. I still need to keep the costs down, but I have started by purchasing a Synology 1812+ for my NAS/iSCSI SAN box. I know I can’t afford to duplicate my 2x host Cisco C210 units and SAN at home – but if I could stay around the $1000 mark for each host it’d be a great start. Depending on memory costs, I’d ideally like to have up to 32GB of RAM in each box.

    thanks,

    mark

    1. Yeah. I need to do a more modern build list and it’s something I’ve been thinking of doing. The only thing I’ve had to do to my lab is bump the RAM to 32GB per host, just because I keep piling more in it.

        1. So… if I’m going to start putting a list together in the next week or so, do you think that using the same parts you originally used would be a good fit, or should I wait and keep searching the different forums for a more updated equipment list? I’ve seen some nice builds using older equipment but haven’t (successfully) found anything more up to date out there. I love the ability to use a micro-ATX type case, and USB boot or SSD drives for the VM platform, to keep things quiet and fast as well as small. If you think you’ll be posting a more updated hardware list sometime ‘soon’, I’ll wait to see what it contains before I jump. I was hoping to purchase the lab before the March Madness tour but might have to keep searching around. As I’m not an expert, and have had a lot of trouble trying to use the VMware HCL listings, I have to rely on you kind folks to ‘frame the puzzle’ for me. Ha ha. I did, though, find this nice listing by one of your fellow workers (http://www.p2vme.com/2012/09/home-lab-great-resource.html?showComment=1361286927548#c4224704754872224117) and it seems pretty well thought out as well. I haven’t seen any updates post-installation on his build, so I’m not exactly sure how it ended up meeting his expectations in the end.

        mark
