HomeLab Part 4: Summary and Final Decision
This is now the fourth part of the design process, and by now you can imagine why it took me so long to choose an adequate design for my new homelab. All of the presented designs have their advantages and disadvantages, and my challenge was to find the one that fits my requirements. Let's start with a quick overview of all the designs I have made.
Overview
Server | Type | Memory | Cost |
---|---|---|---|
Intel NUC | Low Power Design | 16 GB | 650 EUR |
HP ProLiant MicroServer G8 | Low Power Design | 16 GB | 603 EUR |
HP ProLiant ML10 | Low Power Design | 32 GB | 790 EUR |
Whitebox AsRock C2750D4I | Low Power Design | 32 GB | 1057 EUR |
Whitebox Supermicro A1SAi-2750F | Low Power Design | 32 GB | 1076 EUR |
HP ProLiant ML310e G8 v2 | Medium Design | 32 GB | 1038 EUR |
Dell PowerEdge T110 II | Medium Design | 32 GB | 890 EUR |
IBM x3100 M4 | Medium Design | 32 GB | 1027 EUR |
Whitebox Supermicro X9SRH-7TF | High-End Design | 64 GB | 1816 EUR
Whitebox Supermicro X9SRH-7TF | Final Design | 80 GB | 1516 EUR |
As you can see, this overview also includes the Final Design for my new homelab, and it is 300 EUR cheaper than the High-End build. How can that be, you may wonder? Here is my component list.
Component | Type | Cost |
---|---|---|
Server | Whitebox | |
Mainboard | Supermicro X9SRH-7TF | 490 EUR |
CPU | Intel Xeon E5-2620 v2 6x 2.1 GHz | 380 EUR |
CPU Cooler | Noctua NH-U12DX i4 | 67 EUR |
Memory | Samsung 16GB M393B2G70QH0-CK0 (2x 150 EUR) | 300 EUR |
Memory | Hynix 8GB (6x) | free |
SSD | Samsung SSD 840 EVO 250GB, 2.5", SATA 6Gb/s | 111 EUR |
SSD | Samsung SSD 840, 120GB, 2.5", SATA 6Gb/s | free (pre-owned)
SSD | Crucial M4, 128GB, 2.5", SATA 6Gb/s | free (pre-owned)
HDD | 2x Hitachi HUA721010KLA330 1TB (Year 2008) | free |
Power Supply | Enermax Revolution X't 530W ATX 2.4 | 78 EUR |
Case | Fractal Design Define R4 Black Pearl | 90 EUR |
Misc. | Intel Quad-port NIC | free |
Price per Server | | 1516 EUR (~1263 EUR w/o tax)
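For anyone who wants to double-check the numbers, here is a tiny sanity check that sums only the paid components from the table above (everything marked "free" is left out):

```python
# Rough sanity check of the per-server price from the component list above.
# Items marked "free" (old RDIMMs, old SSDs, XIV disks, quad-port NIC) are not counted.
components = {
    "Supermicro X9SRH-7TF mainboard": 490,
    "Intel Xeon E5-2620 v2": 380,
    "Noctua NH-U12DX i4 cooler": 67,
    "2x Samsung 16GB RDIMM": 300,
    "Samsung 840 EVO 250GB SSD": 111,
    "Enermax Revolution X't 530W PSU": 78,
    "Fractal Design Define R4 case": 90,
}

total = sum(components.values())
print(f"Price per server: {total} EUR")      # 1516 EUR
print(f"Both servers:     {2 * total} EUR")  # 3032 EUR
```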
With this build I relaxed my requirements on price and power consumption a little in favour of performance and memory capacity. Below I will explain how I saved so much money compared to the original High-End build. In the end I bought two of these beasts.
Memory
Thanks to my grey eminence, all of my 8GB RDIMMs were sponsored by him; thanks again for supporting me, my friend. But 64GB would have been too little in the end, so I used the money I saved on memory to buy 4 additional 16GB RDIMMs (2 per host), raising the memory per host to 80GB.
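To spell out the memory math (the DIMM counts are the ones from the component list above):

```python
# Memory per host: six sponsored 8GB RDIMMs plus two purchased 16GB RDIMMs.
sponsored_gb = 6 * 8    # 48 GB (Hynix 8GB modules, free)
purchased_gb = 2 * 16   # 32 GB (Samsung 16GB modules, 150 EUR each)

per_host = sponsored_gb + purchased_gb
print(f"Memory per host:   {per_host} GB")      # 80 GB
print(f"Total for 2 hosts: {2 * per_host} GB")  # 160 GB
print(f"Memory spend:      {4 * 150} EUR")      # 600 EUR for the 4x 16GB DIMMs
```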
SSD
I already had two "old" SSDs lying around, so I use those for PernixData testing and the new 250GB Samsung 840 EVO for other testing purposes, like placing the VMDKs of my nested VSAN cluster on it.
HDD
A previous customer decommissioned an old IBM XIV Gen1 and told me I could take whatever I wanted from the boxes. So I took some of the installed 1TB Hitachi disks and used them as local datastores for my HP StoreVirtual VSA. The only problem with such old disks is that they draw 9-10 watts even when idle. In the beginning I put 4 of these disks into each host, which means approx. 40 watts more per host, and that was/is not acceptable for me. After a short period I reduced the number to 2 disks, but it's still too much.
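To put that idle draw into money, here is a rough estimate, assuming about 0.28 EUR per kWh (a ballpark household rate, not an exact figure) and the disks spinning 24/7:

```python
# Back-of-the-envelope yearly cost of the old XIV disks idling 24/7.
# Assumption: electricity at roughly 0.28 EUR per kWh (ballpark, adjust to taste).
IDLE_WATTS_PER_DISK = 9.5   # the disks draw 9-10 W when idle
EUR_PER_KWH = 0.28

def yearly_cost(disks_per_host, hosts=2):
    watts = disks_per_host * hosts * IDLE_WATTS_PER_DISK
    kwh_per_year = watts * 24 * 365 / 1000
    return kwh_per_year * EUR_PER_KWH

print(f"4 disks per host: ~{yearly_cost(4):.0f} EUR per year")  # ~186 EUR
print(f"2 disks per host: ~{yearly_cost(2):.0f} EUR per year")  # ~93 EUR
```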
Currently I'm evaluating different types of disks, ranging from WD Red and WD Purple to WD Se, or comparable Seagate models. The problem is that you don't get 7200 RPM drives with low idle/active power draw. On the other hand, 5900 RPM drives are power savers, but you don't get much IOPS performance out of them. What a dilemma!
Another idea is to use SSDs, but at the moment 512GB or even 1TB models are too expensive for me.
Networking
Because I have no budget left for a 10Gbit switch, both onboard 10GbE NICs are directly connected between the hosts for vMotion and VSAN traffic. That's the reason I put an additional quad-port NIC into each host. These NICs also came from the old IBM XIV and were free.
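To make the direct-connect idea a bit more concrete, here is a purely illustrative sketch of how the two point-to-point links could be addressed (the VMkernel names and subnets are made up for the example and are not my actual config):

```python
# Illustrative direct-connect layout for two hosts (example values only).
# Each onboard 10GbE port is cabled straight to its counterpart on the other host,
# and each link carries one traffic type on its own non-routed subnet.
direct_links = {
    "vmk1 (vMotion)": {"host1": "172.16.10.1", "host2": "172.16.10.2", "subnet": "172.16.10.0/24"},
    "vmk2 (VSAN)":    {"host1": "172.16.20.1", "host2": "172.16.20.2", "subnet": "172.16.20.0/24"},
}

for link, cfg in direct_links.items():
    print(f"{link}: {cfg['host1']} <-> {cfg['host2']} on {cfg['subnet']}")
```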
I recently upgraded my network stack from an SG200-26 to an SG300-52 because I was running out of ports, and a Layer 3 switch would be a cool new device to play with! Thanks to Chris Wahl and his awesome post "New network design for the lab", I'm also in the middle of a network redesign, including the complete AD domain. That post, together with his write-up on the SG300, was one of the reasons I bought the SG300.
I hope you have enjoyed my design process series so far. In the next parts I will show you some of the problems I encountered while building my new lab and where the lab is now "stored". So stay tuned.
If you have any comments please drop me one or send me a tweet.
Comments
"That's going to be a nice (home) lab server, very curious what the actual power consumption will be when it is up & running!"
My reply: I will post it together with all the other lab stuff, but I'm currently in a migration process that may remove some of the power-hungry parts! 🙂