Like many people in IT Infrastructure-land, I find access to a lab environment extremely valuable. For much of my career, I could periodically carve a chunk out of a sizable Enterprise-level infrastructure… and it sufficed. But I am not in that environment any more, and my need for a lab has really grown. Earlier in 2020, I hit the breaking point: VMware Workstation on my corporate laptop was not going to cut it any longer. Time to break out the checkbook.
But, what did I want? NUCs look cool, but feel expensive for what they are. The Mac Mini felt similar. Supermicro had a compelling size and entry point into Xeon, but without a full-on rack, I felt a little limited in future options.
It took a little while, but then it hit me like a pile of bricks… WHY do I want a lab? Suddenly, things became clearer. I didn’t need to let popular/common options drive my decision. Rather, just as we should do with our own environments and customers, I should identify the use case, define the requirements, and design around those requirements. It seems silly how long it took to get to that point… but, we’re here now.
So… what did I want to do with my lab? How am I going to use it? WHY do I want a lab?
- Gain experience with vSAN storage (solution and components)
- Gain experience with GPU-based workloads (ex: running OpenAI workloads)
- Gain experience with infrastructure automation solutions (ex: vRealize Automation, Terraform, Ansible, etc…)
- Gain experience with high-speed networking
- Support activities with the TAM Lab program within the VMware TAM Services organization
- Don’t paint myself into a corner. Make sure the solution is flexible enough to meet my needs, whatever they are going forward.
How would I do this?
- Leverage vSphere 7.0
- Leverage a GPU for CUDA-based workloads
- All-flash vSAN configuration
- High-speed networking between nodes
So, now that I know what I want it to do, how do I design it? Some design considerations for my homelab were:
- I wanted consistency for all of the nodes in my environment. No snowflakes (well… as much as possible).
- Expansion options available (think PCI slots, storage headers, etc…)
- Appropriate cost for the environment. Money is a design consideration, to be sure. Don’t cheap out, but understand the cost/value relationship for the components.
- vSAN eats up memory on ESXi hosts. Ensure there is enough memory available on the hosts for vSAN to operate as designed while running the proper workloads.
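To put a number on that memory consideration, here is a minimal sketch of a per-host vSAN overhead estimate. It mirrors the general shape of VMware's published formula (a fixed base, plus per-disk-group and per-capacity-device amounts), but the constants below are illustrative assumptions, not official values; check VMware's vSAN design guidance for your version before sizing anything real.

```python
# Rough vSAN host memory overhead estimate. The formula SHAPE follows
# VMware's documented approach (base + per-disk-group + per-capacity-disk),
# but these constants are ILLUSTRATIVE placeholders, not official figures.
BASE_MB = 5426             # assumed fixed per-host overhead
PER_DISK_GROUP_MB = 636    # assumed overhead per disk group
PER_CAPACITY_DISK_MB = 70  # assumed overhead per capacity device

def vsan_memory_overhead_mb(disk_groups: int, capacity_disks: int) -> int:
    """Estimate vSAN memory overhead for one ESXi host, in MB."""
    return (BASE_MB
            + disk_groups * PER_DISK_GROUP_MB
            + capacity_disks * PER_CAPACITY_DISK_MB)

# My nodes: one disk group each (250GB M.2 cache + 1TB SSD capacity).
overhead = vsan_memory_overhead_mb(disk_groups=1, capacity_disks=1)
print(f"~{overhead} MB (~{overhead / 1024:.1f} GB) of each node's 64GB")
```

Even with placeholder numbers, the takeaway holds: a few GB per host disappears before any workload boots, which is why memory headroom made the design list.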
- Leverage low-cost/freely available infrastructure services where appropriate.
  - This is not because I dislike Microsoft (far from it). However, it goes back to the cost of the environment. The licensing to properly run infrastructure services (DNS, certificate authority, etc…) can be overwhelming.
- My network needs really boil down to routing across multiple VLANs in the environment and Jumbo Frame support. I’m not massively interested in notable enterprise networking solutions (ex: Cisco). Fiber-based networking with a copper uplink to the existing network met a price and functionality point that was quite agreeable. Again… don’t overdo it.
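For the Jumbo Frame piece, enabling MTU 9000 on an ESXi host comes down to a couple of `esxcli` calls plus a verification ping. The vSwitch name, vmkernel interface, and peer IP below are assumptions for illustration; substitute whatever carries your vSAN/storage traffic (and remember the physical switch must allow jumbo frames too).

```shell
# Assumed names for illustration: vSwitch1 / vmk1 carry storage traffic.

# Raise the MTU on the standard vSwitch carrying storage traffic
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

# Raise the MTU on the vmkernel interface as well
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# Verify end-to-end with a large, don't-fragment ping to a peer host
# (8972 = 9000 minus IP and ICMP headers)
vmkping -d -s 8972 -I vmk1 192.168.10.12
```

If the `vmkping` fails while a normal-sized ping succeeds, some hop in the path is still at the default 1500 MTU.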
- The lab would be deployed in a basement storage area, so heat and noise were considerations, but it was acceptable if the solution put off some heat and had a notable sound profile.
With all of that above, I felt confident that the solution I came up with should be able to meet my needs! So… what did I end up with?
3 Nodes. Each node with the following components:

| Component | Qty | Model | Notes |
| --- | --- | --- | --- |
| CPU | 1 | AMD Ryzen 5 3600 | 6 cores; 3.6 GHz (boost 4.2 GHz) |
| RAM | 4 | 16GB | 64GB in total |
| Motherboard | 1 | Gigabyte X570UD | 5 PCIe slots |
| Video Card | 1 | MSI GeForce GT 710 | Used for host console functions. On-board video is not functional with this Ryzen architecture (a Ryzen APU is needed for that). |
| Video Card | 1 | Nvidia Quadro K2200 | SNOWFLAKE! Only used for a single node; not every node has a GPU. |
| Hypervisor Storage | 1 | SanDisk 64GB Cruzer Fit | Hypervisor installation |
| vSAN Storage | 1 | WD Blue 3D NAND 1TB SSD | vSAN Capacity Tier |
| vSAN Storage | 1 | WD Blue 3D NAND 250GB M.2 2280 | vSAN Cache Tier |
| 10Gb Networking | 1 | Intel | 2 x 10Gb SFP+ |
| 10Gb Networking | 1 | 10Gtek DAC | Support for MikroTik |
| Case | 1 | Fractal Design Focus | ATX Mid Tower |
| Power Supply | 1 | Corsair CV Series CV450 | Bronze Certified |
Plus a MikroTik CRS305-1G-4S+IN 5-port 10Gb SFP+ switch for the environment. This little switch includes some additional L2/L3 functions above and beyond my current use-cases that may prove to be useful in the future.
At the end of the day, I let the WHY drive the decision-making process. The WHY focused my efforts towards a more intentional and purposeful implementation versus trying to fit the mold of what everyone else was doing. The WHY led me to a roll-your-own deployment that meets my needs now and allows me to pivot in the future.
Are there things I am missing? Certainly… I don’t have out-of-band management or IPMI. But I am fine without those things. I have what I need for my homelab because I know WHY I have it.
If you are looking into building your own homelab environment… do yourself a favor and really think about WHY you want one. If you start with the WHY, you’re going down a good path!