HomeLab Part 4: Summary and Final Decision

This is now the fourth part of the design process, and by now you can imagine why it took me so long to choose an adequate design for my new homelab. All of the presented designs have their advantages and disadvantages, and my challenge was to find the design that fits my requirements. Let’s take a quick look at all the designs I have made.

Overview

| Server | Type | Memory | Cost |
|---|---|---|---|
| Whitebox Supermicro X9SRH-7TF | Final Design | 80 GB | 1516 EUR |
| Intel NUC | Low Power Design | 16 GB | 650 EUR |
| HP ProLiant MicroServer G8 | Low Power Design | 16 GB | 603 EUR |
| HP ProLiant ML10 | Low Power Design | 32 GB | 790 EUR |
| Whitebox AsRock C2750D4I | Low Power Design | 32 GB | 1057 EUR |
| Whitebox Supermicro A1SAi-2750F | Low Power Design | 32 GB | 1076 EUR |
| HP ProLiant ML310e G8 v2 | Medium Design | 32 GB | 1038 EUR |
| Dell PowerEdge T110 II | Medium Design | 32 GB | 890 EUR |
| IBM x3100 M4 | Medium Design | 32 GB | 1027 EUR |
| Whitebox Supermicro X9SRH-7TF | High Design | 64 GB | 1816 EUR |

As you can see, this overview also includes the Final Design for my new homelab, and it’s 300 EUR cheaper than the High End build. How can that be, you might wonder? Here is my component list.

| Component | Type | Cost |
|---|---|---|
| Price per Server | | 1516 EUR (~1263 EUR w/o tax) |
| Server | Whitebox | |
| Mainboard | Supermicro X9SRH-7TF | 490 EUR |
| CPU | Intel Xeon E5-2620 v2, 6x 2.1 GHz | 380 EUR |
| CPU Cooler | Noctua NH-U12DX i4 | 67 EUR |
| Memory | Samsung 16GB M393B2G70QH0-CK0 (2x 150 EUR) | 300 EUR |
| Memory | Hynix 8GB (6x) | free |
| SSD | Samsung SSD 840 EVO 250GB, 2.5", SATA 6Gb/s | 111 EUR |
| SSD | Samsung SSD 840, 120GB, 2.5", SATA 6Gb/s / Crucial M4, 128GB, 2.5", SATA 6Gb/s | free (pre-owned) |
| HDD | 2x Hitachi HUA721010KLA330 1TB (Year 2008) | free |
| Power Supply | Enermax Revolution X't 530W ATX 2.4 | 78 EUR |
| Case | Fractal Design Define R4 Black Pearl | 90 EUR |
| Misc. | Intel Quad-port NIC | free |
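As a quick sanity check on the table, the paid line items really do add up to the quoted per-server price. A minimal sketch (the grouping and labels are just my reading of the table, and the "free" items are omitted since they cost nothing):

```python
# Paid components per server, in EUR (free/pre-owned items omitted)
components = {
    "Mainboard (Supermicro X9SRH-7TF)": 490,
    "CPU (Intel Xeon E5-2620 v2)": 380,
    "CPU cooler (Noctua NH-U12DX i4)": 67,
    "Memory (2x Samsung 16GB RDIMM)": 300,
    "SSD (Samsung 840 EVO 250GB)": 111,
    "Power supply (Enermax Revolution X't 530W)": 78,
    "Case (Fractal Design Define R4)": 90,
}

total = sum(components.values())
print(f"Per server: {total} EUR, for both hosts: {2 * total} EUR")
# Per server: 1516 EUR, for both hosts: 3032 EUR
```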

With this build I relaxed my requirements on price and power consumption a little in favour of performance and memory capacity. In the following sections I will explain how I saved so much money compared to the original High End build. In the end I bought two of these beasts.

Memory

Thanks to my grey eminence, all of my 8GB RDIMMs were sponsored by him. Thanks again for supporting me, my friend. But 64GB turned out to be too little in the end, so I used the money I saved on memory to buy four additional 16GB RDIMMs (two per host), raising the memory per host to 80GB.
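The arithmetic behind that upgrade, as I read it from the component list (six sponsored 8GB sticks plus two purchased 16GB sticks per host; the exact slot layout is my assumption):

```python
# Per-host memory after the upgrade: 6x 8GB sponsored + 2x 16GB purchased
sponsored = 6 * 8    # GB from the sponsored Hynix RDIMMs
purchased = 2 * 16   # GB from the newly bought Samsung RDIMMs
print(f"{sponsored + purchased} GB per host")  # 80 GB per host
```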

SSD

I already had two “old” SSDs lying around, so I use these for PernixData testing and the new 250GB Samsung 840 EVO for other testing purposes, like placing the VMDKs of my nested VSAN cluster on it.

HDD

A previous customer decommissioned an old IBM XIV Gen1 and told me I could take whatever I wanted from the boxes. So I took some of the installed 1TB Hitachi disks and used them as local datastores for my HP StoreVirtual VSA. The only problem with such old disks is that they draw 9-10 Watts each when idle. In the beginning of my lab I put four of these disks in each host, which means approx. 40 Watts more per host, and that was/is not acceptable for me. After a short period I reduced the count to two disks, but it’s still too much.
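To put that idle draw into perspective, here is a small sketch of the math (the 9.5 W figure is the middle of the 9-10 W range from above; the 0.28 EUR/kWh electricity price is purely my assumption, so adjust it to your local tariff):

```python
# Rough yearly cost of always-on idle disks per host
def disk_power_cost(disks_per_host, watts_per_disk, eur_per_kwh=0.28):
    watts = disks_per_host * watts_per_disk          # extra idle draw per host
    kwh_per_year = watts * 24 * 365 / 1000           # energy over a full year
    return watts, kwh_per_year, kwh_per_year * eur_per_kwh

for disks in (4, 2):
    w, kwh, eur = disk_power_cost(disks, 9.5)
    print(f"{disks} disks: {w:.0f} W idle, {kwh:.0f} kWh/year, ~{eur:.0f} EUR/year per host")
```

Even after dropping to two disks per host, that is still roughly half of the original ~40 W penalty running around the clock.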

Currently I’m evaluating different types of disks, ranging from WD Red over WD Purple to WD Se, or similar Seagate models. The problem is that you don’t get 7200 RPM drives with low idle/active power draw. On the other hand, 5900 RPM drives are power savers, but you don’t get much IOPS performance out of them. What a dilemma!

Another idea is to use SSDs, but at the moment 512GB or even 1TB models are too expensive for me.

Networking

Because I have no budget left for a 10Gbit switch, both onboard NICs are directly connected between the hosts for vMotion and VSAN traffic. That’s the reason I put an additional quad-port NIC into each host. These NICs also came from the old IBM XIV and were free.

I recently upgraded my network stack from an SG200-26 to an SG300-52 because I was running out of ports, and a Layer 3 switch would be a cool new device to play with! Thanks to Chris Wahl and his awesome post “New network design for the lab”, I’m also in the middle of a network redesign, including the complete AD domain. This, together with his post about the SG300, was one reason I bought the SG300.

I hope you have enjoyed my design process series so far. In the next parts I will show you some problems I encountered when building my new lab, and where my new lab is now “stored”. So stay tuned.

If you have any comments, please drop me one or send me a tweet.
