We’ve covered the networking and storage components of my home vSphere lab build, and now it’s time to talk servers. This is where things get tricky: you really do need to pay attention and make smart choices, or you’ll spend more money than you should and/or miss out on vSphere features you want. This was the biggest surprise of the whole build. I expected to grab some cheap, small i7-type servers and load them up with cheap RAM. That didn’t really happen…
As I’ve done in other sections, the first thing to do is outline my requirements:
My requirements are pretty straightforward. The problem is that vSphere does not have an extensive Hardware Compatibility List (HCL). It’s not like Windows. This really becomes a problem around storage and networking. vSphere has no support for the software RAID that is so common on motherboards these days, so if you want to build a system and use internal RAID storage, expect to buy a supported hardware RAID card. LSI boards are very common for this purpose. Driver support for NICs can also be a problem, though there are ways to add drivers for NICs not supported in the standard installation. It just depends on whether you want that hassle, and whether you want to deal with it again every time an upgrade is released. A good place to start is here at the ESX/ESXi 4.0 Whitebox HCL. These lists are sometimes a little dated, so check the forums for more information.
The point is that you need to plan your build accordingly and really look at what chipsets a motherboard uses. If you don’t care about the onboard NICs and storage controller your options open up. You can always add PCIe NICs to the system, though this can often eat up any money you’d save by going with a cheaper board over one that’s completely supported.
An easy way around all of this is to buy a server that is on the vSphere HCL. The difficulty is finding one that is cheap enough and not built to run in a data center (think fan noise and power usage). There are some good options out there: HP has the ML110 line and Dell has the T110 line. If one of those fits your needs and budget, it’s probably the easiest solution. The downside is that you’ll probably need to add one or more NICs, and if you want a faster CPU the price goes up by far more than the actual difference between the chips if you bought them yourself. That’s the main reason I didn’t go with a prebuilt option. Going from the X3430 CPU (no hyperthreading) to the X3440 (with hyperthreading) on the HP was something like $100. Add one or more NIC cards (the HP server has a single NIC by default) and you’re in for another $100. It adds up.
The interesting thing about my build is that I ended up very close to several other whitebox builds out there. I think it shows that great minds think alike and, more likely, the vSphere HCL limits your options. Phil’s Baby Dragon (nice local storage) is here and Chris’ configuration is here. Here is the breakdown. Each of my two servers has:
Let’s go over some detailed information…
Like I said before, the key to a good motherboard is the chipset support for NICs and storage, as well as support for the hardware virtualization options, mainly Intel VT-d. The X8SIL-F uses the Intel 3420 chipset, which is very solid and supports everything I want. The only downside is that it requires ECC RAM; there is no option to use non-ECC memory, which drives the price of memory up. But that was a compromise I was willing to make for everything else it offers. The two onboard NICs are Intel 82574L and are on the vSphere HCL. This board also has a third, dedicated NIC for IPMI 2.0 management, which is very useful. I want full vSphere support so Distributed Power Management (DPM) can use IPMI for server shutdown/wake. The IPMI functionality also gives you full remote KVM access to the server so you can run it headless. Installing ESXi is very simple: just remote in, mount the ISO via Virtual Storage, and go. Here is a screenshot showing the remote display.
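Since the board’s BMC speaks standard IPMI 2.0, the same operations DPM performs can be exercised by hand with ipmitool from any box that can reach the dedicated management NIC. Here is a minimal sketch; the address, username, and password are hypothetical placeholders, not my actual setup:

```python
# Sketch of driving the X8SIL-F's BMC over its dedicated IPMI NIC.
# The host, username, and password below are hypothetical placeholders.
import subprocess

def ipmi_cmd(host, user, password, *args):
    """Build an ipmitool argument list for a remote lanplus session."""
    return ["ipmitool", "-I", "lanplus",
            "-H", host, "-U", user, "-P", password, *args]

# The power operations DPM drives via IPMI:
status = ipmi_cmd("10.0.0.21", "ADMIN", "secret", "chassis", "power", "status")
wake   = ipmi_cmd("10.0.0.21", "ADMIN", "secret", "chassis", "power", "on")

# On a real network you'd actually execute it:
# subprocess.run(status, check=True)
```

Building the argument list separately also makes it easy to script a whole lab’s worth of hosts from one management box.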
If you want to put a display on the server you can, as the X8SIL has onboard Matrox video. The great thing about this board is that it just works. No special oem.tgz file for drivers, no custom installs…it just goes. Pop the disc/ISO in, tell it to install to the USB thumb drive, and you’re done. Simple and fully supported. The price isn’t bad either at around $189. Again, take the two fully supported NICs into account.
One quick word of warning: if you decide to use a different board with a chipset that does all the cool Intel VT features, make sure the BIOS also exposes them. There are a lot of consumer/desktop boards out there that don’t expose those features in the BIOS, and if you can’t enable them they’re useless. Also, upgrade the BIOS on the X8SIL-F to the latest version and then do the same for the IPMI firmware. Make sure the NIC configuration in the IPMI interface is set to “Dedicated”.
Finally, I want to put in a good word for Supermicro’s customer service. I ordered all of my equipment from NewEgg, and while NewEgg has great prices (usually), their customer support is iffy. In fact, I paid the rush fee on my second server order so it would go out that day via UPS 3-day. Everything went out that day via 3-day UPS…except for the CPU. It went out Ground from California. I live in North Carolina…it took a while. Anyway… when I was finally building my second server I found that it would only recognize the first DIMM; the second DIMM slot was bad. Supermicro offers overnight cross-shipping as standard on their motherboards. I just submitted a form with my information and the board’s serial number, and that was it. In two days I had a replacement board. Very, very pleased with that level of service.
Choosing a CPU isn’t too bad. Pick a price point and/or speed, decide how many cores you want, and whether you want hyperthreading. If you choose a different board, be aware that some boards with onboard video rely on the CPU’s integrated graphics; without that you’ll need to add a dedicated video card of some sort. Also, some CPUs are listed as having VT-d support, which is required for VMDirectPath. I hear conflicting things on this: VT-d should be a chipset feature, but it appears that some CPUs don’t support it, in which case it won’t work. Be careful there, as it gets even more confusing since some Intel i5s supported VT-d and now don’t. Originally I was going to get the X3430 but decided I wanted hyperthreading, so I moved to the X3440. Then I saw it was only about $20 more to step up to the X3450, so I settled there. The X3450 is faster and has more potential to “Turbo” up to higher frequencies. It’s a 95W TDP (Thermal Design Power) part. For lower power consumption there are some L34xx CPUs, but they are dual-core. There are also some 135W CPUs out there, but I thought the X3450 was the right compromise.
To cool the CPU I’m using the Intel OEM cooler. It’s VERY quiet (silent really) and does a great job. There is no need to go with a larger, more expensive cooler.
The choice of memory was really driven by the motherboard’s chipset. The Intel 3420 chipset requires ECC RAM and can be a bit picky on the configuration so check the information on Supermicro’s site. The link in the breakdown above goes to the Kingston compatibility results for the X8SIL-F motherboard. There are a lot of options there. I chose two of the KVR1333D3E9S/4G kits for each system. Reasonably priced with a solid warranty. Before buying this RAM I confirmed its compatibility with a few other people who run the same combination. I went with 2x4GB to leave me room later. Originally I wanted 16GB of RAM in each system but due to the extra cost of ECC (like 30% additional) I am running on 8GB each for a while. You can do a surprising amount with just 8GB per system so it hasn’t been an issue yet.
If you look at Supermicro’s memory guide, they list Hynix memory, and all of my Kingston sticks use Hynix chips. One odd thing: I ordered my server parts separately, meaning I ordered one server’s worth of parts to test and then the second set once I was satisfied. My second set of memory kits looks different from the first. At first I was pretty concerned, but on closer examination the DRAM chips were exactly the same. The only real difference was the size of the PCB they are mounted on, and they work great.
There are tons of options for cases out there. Just pick one you like in the price range you want. Make sure it fits the board size that you use, and if you plan on internal storage, confirm the number of drive bays. I chose the Lian Li V351B due to the reputation and known quality of Lian Li’s cases. The V351 has a removable motherboard tray, which makes installing the board easy, and the side panels come off easily, allowing you to do any cabling with the board in place. It does not come with a power supply, so you need to add that. If I did this again I’d probably choose a smaller case, if I could find one. I’m not sure I could, since the only way to save space would be a case without any drive bays, and those are usually HTPC cases. I’m not sure an HTPC case would provide adequate airflow for the CPU.
There are a few notes to keep in mind when putting the X8SIL-F in this case. If you read Phillip’s overview you’ll see he mentions replacing the fans in the case. That’s an option but not required. The fans in the V351 are 3-pin fans and the X8SIL wants 4-pin fans. Luckily, the X8SIL has backward compatibility support for 3-pin fans on 3 of the fan connections. I used one for the CPU cooler and two for the large 120mm fans in the front of the case. The rear fan was just connected to a normal 5v connection on the power supply (adapters included with the case).
As mentioned here in Chris’ writeup the V351’s power LED is 3-pin and the X8SIL wants a 2-pin. That’s an easy fix…just move one of the pins over to the middle spot on the connector and then you get the nice pretty blue LED on the front of the case. While not a big deal it’s nice to easily see if a server was put in standby by DPM.
The only real downside to this case is the price; you can get another case with a power supply included for the same or less. Keep in mind, though, that you don’t get to choose your power supply when you buy a combo like that. I’ve built computers for years and understand the quality differences between cases. Cheap cases aren’t built well: the edges are sharp, things don’t fit right, they’re hard to work in and modify, and they rattle if anything causes vibration. For this reason I sucked it up and paid for good cases.
Power supplies are not exciting. I just wanted something quiet, efficient, and as low-wattage as possible (so it runs closer to its efficient load range). The Rosewill Green series is very reasonably priced and is 80%+ efficient at all power draws. I’d like to go higher, into the 90s, but the price goes up dramatically. The 430W model is the smallest offering in Rosewill’s Green line, so that’s what I went with, and I’ve been pleased. Seeing as these servers top out around 120W at full load, I think I have headroom. The fan in the power supply is VERY quiet (pretty much silent) and the unit fits no problem. What else can I say, really?
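As a sanity check on that headroom claim, here’s the back-of-envelope math, assuming the ~120W peak was measured at the wall rather than on the DC side:

```python
# Rough PSU sizing math for the 430 W Rosewill Green.
# Assumption: the ~120 W peak reading was taken at the wall outlet.
wall_peak_w = 120
efficiency = 0.80                 # the "80%+ at all draws" floor

dc_load_w = wall_peak_w * efficiency    # watts actually delivered to the system
load_fraction = dc_load_w / 430         # fraction of the PSU's rated capacity

print(dc_load_w, round(load_fraction * 100))  # 96.0 22
```

So even flat out, the PSU is loaded to roughly a fifth of its rating, which helps explain why its fan never audibly spins up.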
USB Thumb Drive
For the ESXi 4.1 installation I used some random 4GB thumb drives I had sitting around. The X8SIL-F has an internal USB port for just this purpose and it works great. Just boot off the ESXi ISO and install to the USB drive. Easy. One thing I didn’t think about is that my 4GB drives were old and therefore really slow. If I grab the VI Client off one of my two boxes it transfers at around 200KB/s and takes 20 minutes, so I don’t do that anymore. Other than that the drives don’t matter.
The full lab has been up for a few weeks now and so far I’m very pleased with the outcome. I blew my original budget but the end result is a very robust setup that supports all of the features that vSphere has to offer. Here are some random notes…
Power usage on these servers is great. I discussed some power utilization and settings in this post here. The servers idle at 38w and peak out around 120w at full load. They are very, very quiet. The only time I’ve heard them at all is when I was stress testing them at 100%. The CPU fan kicks on full speed and emits a faint whine. At normal load they are silent. What’s funny is when I kill my torture test (using Prime95) the fans instantly spin back down and go silent.
Take some time and move your cabling out of the way. When I built my first server I didn’t bother with that at all, and the loose cables blocked airflow through the case. After 2 or 3 minutes of Prime95 torture testing, the system would start beeping due to thermal alarms. I moved my excess cabling up behind the power supply. This isn’t a problem at all, since the power supply fan is on the bottom of the unit, so it pulls air up from the case and then out. Perfect arrangement. Here is an after picture showing the sloppy, yet effective, clean up. I also moved some unused cables (for things like audio) to the front of the case.
Again, not pretty but it’s a clear path from the front of the case to the back. Like this I can run the torture test for as long as I want without any problems.
And one more… You can’t adjust the IPMI alarm thresholds on this system, or at least I see nowhere to do it. I have a ticket open with Supermicro to ask if there is a way. This causes one problem: at idle the cooling works so well that my CPU fan actually trips the low-RPM alarm threshold. I have disabled this alarm in vCenter for now, but I’d like to reactivate it if possible to watch for a failing fan. Another option would be to change the fan configuration in the motherboard BIOS. I have mine set to Balanced; I could probably raise that and increase the minimum fan RPM, but for now I’ll run alarmless.
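If Supermicro does come back with a supported path, ipmitool has a `sensor thresh` subcommand that can sometimes rewrite thresholds, depending on whether the BMC firmware accepts the write. A sketch of what that would look like; the host, credentials, `FAN1` sensor name, and RPM values are all hypothetical:

```python
# Sketch: inspecting and (firmware permitting) lowering fan alarm
# thresholds via ipmitool. All connection details are placeholders.
def sensor_cmd(host, user, password, *args):
    """Build an ipmitool 'sensor' argument list for a remote session."""
    return ["ipmitool", "-I", "lanplus", "-H", host,
            "-U", user, "-P", password, "sensor", *args]

# List every sensor with its current thresholds:
list_all = sensor_cmd("10.0.0.21", "ADMIN", "secret", "list")

# Lower the non-recoverable/critical/non-critical RPM floors for FAN1
# so a slow-but-healthy idle fan no longer trips the alarm:
lower_fan = sensor_cmd("10.0.0.21", "ADMIN", "secret",
                       "thresh", "FAN1", "lower", "200", "300", "400")
```

Whether the BMC honors the write (or silently resets it on reboot) varies by firmware, so verify with another `sensor list` afterward.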
After everything is said and done, I’m very pleased with these servers. They are powerful and quiet. I originally expected to build each server for under $500, but that’s not the case here; they ended up around $750 each. You can save money in a few places. A cheaper case is a big one. A cheaper CPU (X3440 or X3430) also works, as do some i3 chips, though those don’t support Intel VT-d. The RAM I’m using is as cheap as you can go with any real certainty. So there are ways to do this for probably $600 if you give up a few minor things. This probably isn’t my final configuration. I plan to add RAM to at least one system; I may take one to 16GB and leave the other at 8GB. I figure this will give me resources but also let me starve one server easily for testing. Along with RAM I’ll probably also pick up one or two Intel SR-IOV capable NICs to play with…but for a good, fully functional server these are perfect.