Introduction:
Hello everyone and welcome to the channel! In today's article, I'll be discussing the hardware I chose for my all-in-one home NAS server. If you missed my last episode, where I talked about planning and explained how I decided what features I needed from my home NAS, you can find it here: https://addictedtotech.net/planning-my-all-in-one-nas-how-i-decided-what-features-i-need/
Over the next few weeks and months, I'll be walking you through the exact process I followed when planning, building and troubleshooting this system. My goal is to share every single step with you, warts and all – the good, the bad, and the ugly – so you can hopefully benefit from my experience and find answers to your own home NAS questions.
Rather watch the YouTube video? Here it is:
Just so you know, this system is already built and running in full production. And I have to say, it really turned out to be a solid home NAS server build.

I managed to achieve all my initial goals and then some! Building this server has been a hugely rewarding process, and for me, that learning experience is truly the best part of building your own home lab.
I've linked all the hardware I discuss today, along with some good alternatives at the bottom of this article. For full transparency: some of these are affiliate links. They won't cost you a penny extra, but I might get a small commission, which helps support me so I can keep making these videos for you. So, a big thank you for your support!
Alright, if you're ready, let's get into it!
What Hardware Did I Choose?
Now that all my planning was done, it was time to select the actual hardware. Here's what I settled on after extensive research. I'll break down each component and give you my full reasons for choosing it – hopefully, this helps you if you're planning your own home server build.

Case: Jonsbo N3 Mini-ITX
Let's start with the case. You might be surprised, but the Jonsbo N3 Mini-ITX Case was actually the deciding factor that kicked off this entire project!

I'd wanted to build my own DIY NAS for years, but what always held me back was the lack of suitable NAS cases. When I saw the sleek, compact design of the Jonsbo N3, I knew immediately it was time. With its Mini-ITX form factor and 8 drive bays, this was the case I'd been dreaming about. With this case in mind, I started planning all the other hardware that would go inside it.
Power Supply (PSU): Cooler Master V750 SFX GOLD
Next up, the power supply. Because I chose the Jonsbo N3 case, I knew my only option was an SFX power supply. I also wanted it to be modular to avoid excess cables blocking airflow. Finally, I aimed for at least an 80+ Gold rating for energy efficiency and reduced noise.
Recommended
Cooler Master V750 SFX Gold ITX SFF Modular Power Supply, 750W 80 Plus Gold
Alternative
Cooler Master V850 SFX Gold ITX SFF Modular Power Supply, 850W 80 Plus Gold
I settled on the Cooler Master V750 SFX GOLD 750W. Yes, 750 watts is definitely overkill for this build, but it was surprisingly the cheapest PSU I could find at the time. A nice bonus is that the extra wattage leaves plenty of headroom if I ever want to repurpose this system – for example, adding a graphics card later on.

I also realised I'd need an additional PSU-to-Molex cable to power the Jonsbo's 8-bay SATA backplane. The backplane requires two Molex connectors, and the single cable Cooler Master supplies doesn't reach both power ports. I easily found this extra cable on eBay, and it works great.

CPU: Intel Core i5-13500
Next up, I needed to choose a CPU. Should I go with Intel or AMD? Since I planned on Plex transcoding and wasn't installing a dedicated graphics card, that pretty much answered the question for me.
I definitely wanted to leverage Intel Quick Sync, as it significantly speeds up Plex transcodes compared to software-only solutions, and it's far more energy-efficient. So, an Intel CPU was the plan from the start.
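If you're going the Quick Sync route too, it's worth verifying the iGPU is actually exposed before you rely on it. Here's a minimal check for a Linux host (an illustrative sketch, assuming the standard i915 driver and render-node path – confirm on your own system):

```shell
# Quick Sync is used via the iGPU's render node, which appears under
# /dev/dri once the i915 driver has loaded (assumption: a Linux host
# with the iGPU enabled in the BIOS).
if ls /dev/dri/renderD* >/dev/null 2>&1; then
  qsv_status="render node present - Quick Sync should be usable"
else
  qsv_status="no render node found - check the BIOS iGPU setting and the i915 driver"
fi
echo "$qsv_status"
```

Plex's hardware transcoding option will only do its job once that render node is visible to whatever container or VM Plex runs in.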
Next, I considered core and thread count. My planning indicated I'd need at least 2 to 3 virtual machines initially. I also factored in future-proofing to ensure ample resources for testing new technologies and spinning up more VMs and containers. With all that in mind, I chose the Intel Core i5-13500 (2.5 GHz, 14-core, 20-thread CPU), which comes with 6 performance cores and 8 efficiency cores.

Another key factor was its lower base TDP of 64 watts, making it more energy-efficient than other CPUs in a similar price range. Some argue that TDP isn't a huge deal, but for an always-on server, I believe a lower base TDP is better, and my testing (which I'll cover in a later episode, so stay tuned!) definitely shows this.
Recommended
Intel® Core™ i5-13500 Desktop Processor 14 cores (6 P-cores + 8 E-cores)
Alternative
Intel Core i5-14500 Desktop Processor 14 cores (6 P-cores + 8 E-cores)
A Word of Caution on Intel Hybrid Architecture (P-cores & E-cores)
Intel's 12th-generation CPUs and newer introduced a hybrid architecture with a mix of high-performance P-cores and energy-efficient E-cores. While designed for optimized power consumption and overall performance, this architecture can introduce extra considerations when running hypervisors like Proxmox VE.

Proxmox VE does support Intel's hybrid layout, with better compatibility on modern Linux kernels (version 6.1 and newer generally work best).
However, Proxmox VE doesn't inherently differentiate between P-cores and E-cores for automated scheduling or workload isolation. This means that, by default, background tasks might not intelligently route to E-cores, nor will resource-intensive VMs or passthrough devices automatically get prioritized to P-cores.
In some real-world scenarios, particularly with VFIO passthrough (like passing through an HBA card for SATA drives), users – myself included – have reported system instabilities, including frequent crashes and kernel panics. These issues can stem from multi-core scheduling challenges, potential CPU affinity conflicts, or general passthrough-related problems.
A common and effective solution for these instabilities, especially in VFIO passthrough situations, is CPU pinning.

This involves manually assigning specific CPU cores to the Proxmox operating system or individual VMs. For example, dedicating P-cores to the Proxmox host can resolve issues where the host and VMs might otherwise contend for the same CPU resources, leading to scheduler race conditions.
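As a sketch of what that looks like in practice on Proxmox VE (the sysfs paths below exist on hybrid Intel CPUs with a recent kernel, and `--affinity` is available from Proxmox VE 7.3 onwards; the VM ID 100 and the core numbering are illustrative assumptions – check yours first):

```shell
# See which logical CPUs are P-cores vs E-cores on a hybrid Intel chip.
cat /sys/devices/cpu_core/cpus   # e.g. 0-11  (P-cores, hyper-threaded)
cat /sys/devices/cpu_atom/cpus   # e.g. 12-19 (E-cores, no hyper-threading)

# Pin a VM (hypothetical ID 100) to the P-cores only:
qm set 100 --affinity 0-11
```

With the VM held on the P-cores, the host and background tasks are left to use the E-cores, which avoids the host and guest fighting over the same cores.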
Therefore, if you want to potentially avoid these complexities when building a new Proxmox VE server, you might consider an Intel CPU from the 11th generation or earlier, which doesn't have the hybrid P-core/E-core design.

Just so you know, I'll be covering the specific issues I had with my hybrid processor and how I resolved them through CPU pinning in detail in future articles. So be sure to check them out.
CPU Cooler: Noctua NH-L9x65

Next up, I needed a CPU cooler that was both silent and had a low profile. The Noctua NH-L9x65 did not disappoint! Its lower profile (being closer to the CPU) left plenty of room for airflow, and the fan is completely silent – a definite win-win.
Recommended
Noctua NH-L9x65 chromax.Black, Premium Low-Profile CPU Cooler (65mm, Black)
Alternative
Noctua NH-L9x65, Premium Low-Profile CPU Cooler (65mm, Brown)
Motherboard: ASRock Z790M-ITX WIFI
When I was looking for a motherboard, there were a few limiting factors I had to consider due to my case choice.
First, the form factor. The Jonsbo N3 meant I needed a Mini-ITX board. However, these boards come with certain limitations. For starters, there aren't as many Mini-ITX options as full ATX boards. They also often have limited features due to their compact size – for instance, only two DIMM slots or just a single PCIe slot.

Recommended
ASROCK Z790M ITX WIFI, LGA1700, MINI ITX, 2 X DDR5, 2 M.2, 4 SATA, HDMI, DP
Solving the SATA Problem
But the main problem I ran into was the lack of SATA connectors. Knowing I wanted an 8-bay NAS, I could only find boards with a maximum of four SATA ports at the time of my research. I knew right then I'd need to find a solution, and I did – it's amazing! More on that later in this episode.
What I knew for certain at this point was that I needed:
- A Mini-ITX form factor motherboard.
- Full-size DDR5 DIMM slots (more on RAM later).
- A PCIe slot for expansion, preferably Gen 5.
- Wi-Fi 6, just in case I ever needed to move the server and use Wi-Fi.
- USB 3.0 ports for transferring data from my old server drives.
- Finally, dual Ethernet with the ability to upgrade to 2.5G in the future. My current network is only 1Gbps, so this was purely for future-proofing. I don't plan to edit video on remote shares or anything that high-bandwidth; I mainly use Ethernet for incremental device backups and video streaming. So, 10Gbps would have been overkill for my specific use case. If you are planning heavy video editing or similar, then definitely look for a 10Gbps board or a PCIe/M.2 adapter that supports it.
I finally settled on the ASRock Z790M-ITX WIFI as it ticked all the right boxes.
RAM: Corsair Vengeance 64 GB (2x 32 GB) DDR5
Initially, I was planning this system with TrueNAS in mind, intending to use the robust ZFS filesystem. The way ZFS works, utilizing its ARC cache, demands a lot of RAM – a lot. The general industry consensus suggests 1GB of RAM for every 1TB of raw data storage. ECC (Error-Correcting Code) memory is also commonly recommended.
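To put that rule of thumb into actual numbers, here's a quick sketch (the 8GB OS baseline and the 32TB pool size are illustrative assumptions):

```shell
pool_tb=32                        # raw pool capacity in TB (example value)
base_gb=8                         # baseline RAM for the OS itself (assumption)
arc_gb=$(( pool_tb ))             # 1 GB of ARC per 1 TB of raw storage
recommended_gb=$(( base_gb + arc_gb ))
echo "Rule-of-thumb RAM for a ${pool_tb}TB ZFS pool: ${recommended_gb} GB"
```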

Now, from my experience working in the domestic/small business IT industry, these recommendations are truly aimed at enterprise or critical data use cases. Implementing the same conditions in a home lab is probably overkill. A random bit flip isn't unheard of, but it's quite rare. So, is it really worth the extra price for a home setup?
There are thousands of home lab admins out there right now using standard, non-ECC RAM, often with far less RAM than recommended for ZFS. I've personally seen setups in production using as little as 16GB of RAM on 32TB or even 64TB ZFS data servers, working flawlessly (albeit somewhat slower than a more robust system). So, if you have the budget and want that extra peace of mind, then absolutely go for ECC.
I'd also like to add that DDR5 RAM does come with some benefits over its predecessors, as DDR5 features on-die ECC. This can correct single-bit errors within individual memory chips during read/write operations. However, while complementary, on-die ECC cannot replace traditional ECC memory that works in coordination with the CPU. So, please bear that in mind.
As I continued down the ZFS rabbit hole, I soon realized that for my setup I'd need to buy at least four hard drives upfront just to create my first four-drive vdev. This was completely unrealistic and conflicted with my overall server goal of growing my storage pool as needed. So, I decided that even though ZFS and its features are superior, at this time it wasn't an option for me and my budget.
Ultimately, I settled on the cheapest RAM I could find that met my needs, which was the Corsair Vengeance 64 GB (2x 32 GB) DDR5-5200 CL40. This gives me 64GB of system RAM, which fits perfectly within my system goals.

Primary OS Storage: Samsung 990 EVO Plus 2TB M.2 NVMe
Next, I looked at the primary OS storage.
My plan is to use this system as a bare-metal hypervisor, so I wanted the base OS installed on fast NVMe storage. I'll also be using this NVMe drive to hold the virtual disks for my VMs, so I needed a large capacity.

I opted for the Samsung 990 EVO Plus 2TB M.2 NVMe. I initially considered putting this in a mirrored RAID, but I decided against it to keep the system cost down. You might want to look into mirroring your base OS drive if your budget allows.
Alternative
Samsung 990 Pro 2 TB M.2-2280 PCIe 4.0 X4 NVME Solid State Drive
PCIe HBA (Host Bus Adapter) Card: LSI SAS 9300-16i
Before we talk about NAS data storage, we need to address that crucial lack of SATA ports on the motherboard.
The solution I found was to use a Host Bus Adapter (HBA) PCIe card. I chose the LSI SAS 9300-16i 12 Gb/s SAS Controller. Mine came with IT mode enabled, though the firmware and BIOS version were quite old (more on that later).

This model supports up to 16 SATA connections, which is major overkill for my system, but I had a few reasons for picking this card:
The first reason is that it's built around two separate SAS controllers – effectively two 9300-8i cards on one board. This is great because if I ever want to run TrueNAS and Unraid on the same server in the future, I can pass each controller through to a separate VM.
The second reason was its expandability. Having additional SATA connectors gives me the option to add cache drives or more SATA drives to my server down the line.
My final reason for going with the 9300 over, say, the 9305, was the price and availability.
If you're looking for an HBA card, you might consider the 9305-16i, as it uses a single 16-port controller and runs much cooler than the 9300 (more on heat issues shortly). However, it's significantly more expensive and harder to find.

You'll also need to make sure that if you buy one of these cards, you either ask the seller to pre-flash IT mode and the latest firmware onto it, or are prepared to flash it yourself.

I have left affiliate links below to HBA cards that should come pre-flashed with IT mode and the latest firmware, but there's never a guarantee, so always reach out to the seller beforehand to confirm.
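Once the card arrives, you can verify the flashed state yourself. Broadcom's `sas3flash` utility is the standard flashing tool for the SAS3 (9300-series) cards; a quick check looks something like this (exact output wording varies by firmware version):

```shell
# List all detected SAS3 controllers with their firmware version, BIOS
# version, and firmware product ID - on an IT-mode card the product ID
# is reported as IT rather than IR.
sas3flash -list

# The card should also be visible on the PCIe bus:
lspci -nn | grep -iE 'lsi|broadcom'
```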
I only recommend using LSI HBA cards, as they work exceptionally well in NAS systems. As I mentioned, these cards often come with separate controllers, making it easy to pass through sections of the card from your hypervisor to your NAS OS virtual machine.
A word of caution here: for stability, you'll need to use VFIO passthrough, which I'll cover in depth in a later article, so stay tuned for that.
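As a quick preview, the passthrough itself looks something like this on Proxmox VE (the VM ID 100 and the PCI address are illustrative assumptions – look up the real address with `lspci` first):

```shell
# Find the HBA's PCI address (the first column, e.g. 01:00.0).
lspci -nn | grep -i sas

# Pass the whole controller through to VM 100. This requires IOMMU
# support enabled in the BIOS and kernel; Proxmox binds the device to
# vfio-pci when the VM starts.
qm set 100 --hostpci0 0000:01:00.0
```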
Recommended
LSI SAS 9300-16I 16 Port Controller HBA Card with PCIe Slot, 12Gb/s SAS
Alternative
SAS9305-16i LSI 9305-16i SATA SAS 12Gbs RAID Controller Host Bus Adapter
Additional Cables
Because I was using a SAS HBA card, I needed to pick up some additional cables. I purchased two internal HD Mini SAS (SFF-8643 host) to 4x SATA hard drive cables, both 100cm long.

This gave me plenty of room for connecting the HBA to the Jonsbo N3 backplane. Each port on the HBA card can connect up to four SATA drives, so you'll only need two of these cables for eight drives. Affiliate links to these cables are below.
Heat Issues:
Now, before we move on: HBA cards are typically used in data center environments with very high airflow. Running my 9300 in a compact case with limited airflow meant the card was getting very hot. I knew about this before I bought it and already had a plan to resolve the problem.
What further compounds the heat problem is that these cards come with no native way to add a fan cooler. Some users online have used cable ties to attach fans directly to the heatsink on the card. Personally, that method never sat well with me, as the heat from these cards can get so intense that it could potentially melt attached fans or cable ties, leading to bigger problems down the line.
I managed to resolve this issue quite easily by attaching a fan directly to the inside of the Jonsbo N3 case, leaving a nice inch gap between the card and the fan.

I'll cover this in a later YouTube video where I demonstrate the full installation build tutorial for this system, so be sure to subscribe to the channel so you don't miss that.
With the lack of SATA ports now resolved, it was time to move on to data storage.
NAS Data Storage: My Choices

As I mentioned previously, I was originally looking to use ZFS as my NAS storage filesystem. However, this didn't fit with my budget or my server goals, as I would have had to buy my full pool of drives upfront. So, I knew I needed to pick a filesystem that could grow with my needs. This falls into the OS software side of things, which I'll discuss in more detail in the next video, so go check that one out if you want to know what I ended up using.
Knowing I could add drives as needed and thus grow my pool with demand, I started looking into purchasing some 3.5-inch SATA drives. The two brands I considered were Western Digital Red drives and Seagate IronWolf drives, as well as their Pro counterparts. I aimed to start my pool with at least two drives, with the data drive being at least 12TB.
Alternative
Seagate IronWolf Pro 12TB NAS 3.5" Internal Hard Drive - 7200 RPM
I also needed a parity drive, which has to be at least as large as the biggest data drive in the pool. I was fortunate enough to have an external USB 20TB Western Digital drive, which I shucked (removed from its plastic enclosure). I attached it to the NAS, and it worked out of the box with no extra tweaks. Some people online have reported needing to cover the first three pins of the SATA power connector with electrical tape due to a 3.3V power issue with some power supplies, but my Jonsbo N3 case, HBA, and motherboard combo had no issues at all.
Recommended
WD 20TB Elements External Hard Drive, Desktop HDD storage, USB 3.0 compatible
This gave me 12TB of instant storage and a 20TB parity drive. It also meant that as I need extra storage, I can continue to purchase drives and add them as and when I need to, which ticked all my server goals.

I'll explain more about all this in our next video that covers the software, so if this interests you, please hit that like and subscribe button, as well as the notification bell, to be notified when we upload new content.
Conclusion
So, that brings us to the end of today's article!
I hope you got some great benefit out of it. Why not share your own experience and the hardware you chose in the comment section below? I look forward to hearing about it!
In the next episode, we'll explore the software powering this NAS build. I'll be detailing the operating systems, applications, and services essential for achieving my goals. So stay tuned for that.
Don't forget, I have a Discord channel that I'm building up. It's very early days, so be one of the first to join! I hope to build a community there, run competitions, and get your feedback on future content ideas. It's also a good place to get any help or support – just drop a question in the correct section, and I'll do my best to answer it.

All that's left for me to say now is thank you all for your time, and I'll see you in the next one!

