Introduction
The first early M.2 PCIe 5.0 client SSDs are finally showing up for sale in the retail space. This prompts the question: do you need a PCIe 5.0 SSD? Spoiler alert: the quick answer is probably not, at least for most use cases.
There are some scenarios where a PCIe 5.0 SSD can make sense, which we will explore.
Can Your System Use a PCIe 5.0 SSD?
Unless you have a fairly recent desktop system with a current-generation CPU, the answer is probably no. There are big caveats to this, though. If you want the full PCIe 5.0 bandwidth, the CPU in your system must support PCIe 5.0, and the M.2 slot that you use for the SSD on your motherboard must also support PCIe 5.0.
You can install an M.2 PCIe 5.0 SSD in a PCIe 4.0 or PCIe 3.0 M.2 slot, or in a system where the CPU does not have PCIe 5.0 support, and it will still work. The link will simply negotiate down to the highest PCIe generation that both the M.2 slot and the CPU support. This sort of defeats one of the main advantages of having a PCIe 5.0 SSD, though.
If you do want to experiment with a shiny new PCIe 5.0 SSD, make sure to consult your motherboard manual. Not all M.2 slots (even in very high-end desktop motherboards) will be PCIe 5.0. If you install your PCIe 5.0 SSD in a PCIe 4.0 slot, your sequential performance will be throttled to PCIe 4.0 speed.
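On Windows, a tool like CrystalDiskInfo (shown later in this post) will report the negotiated PCIe link speed and width for an NVMe drive. If you happen to be on Linux, you can read the same information from sysfs. Here is a minimal Python sketch, assuming the drive shows up as nvme0 (adjust the device name for your system):

```python
# Minimal sketch (Linux only): read the negotiated PCIe link speed/width
# for an NVMe drive from sysfs, so you can tell whether it is actually
# running at PCIe 5.0 x4 (32.0 GT/s) or has fallen back to an older
# generation. Assumes the drive is /sys/class/nvme/nvme0.
from pathlib import Path

dev = Path("/sys/class/nvme/nvme0/device")  # adjust nvme0 for your drive

for attr in ("current_link_speed", "max_link_speed",
             "current_link_width", "max_link_width"):
    try:
        print(f"{attr}: {(dev / attr).read_text().strip()}")
    except OSError:
        print(f"{attr}: not available")
```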
Do You Need a PCIe 5.0 SSD?
Since these early-model PCIe 5.0 SSDs have appeared, there have been a lot of hot takes:
- First wave of PCIe 5.0 SSDs arrives with high prices and ridiculous heatsinks | Ars Technica
- Inland PCIe Gen5 SSD costs $350 (2.4x more than Gen4 SSD) and has a loud fan – VideoCardz.com
- 2TB Performance Results – Phison E26 SSD Preview: PCIe 5.0 SSDs Are Finally Here | Tom’s Hardware
- SSTC PCIe 5.0 SSD hits 10GB/s speeds on Intel and AMD platforms – VideoCardz.com
I particularly liked Ars Technica’s opinion that only people with more money than sense would buy an early PCIe 5.0 SSD…
Why Would You Want a PCIe 5.0 SSD?
Here are some scenarios where it can make sense:
- If you have applications that rely heavily on sequential read/write performance (see the sketch after this list)
- For example, some tasks in SQL Server, such as backups, restores, and index maintenance
- If you have more than one PCIe 5.0 SSD in the system
- This will let you copy or move files between drives at higher PCIe 5.0 bandwidth speeds
- You just want one to play with…
- As long as you realize this and set your expectations accordingly, this is fine
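To illustrate the kind of workload in the first bullet above, here is a very rough Python sketch that times a large sequential write in 1 MiB blocks, similar in spirit to CrystalDiskMark’s SEQ1M tests. The file path is just an assumption, and this is not a substitute for a proper benchmarking tool, since OS caching can inflate the numbers unless the test file is much larger than your RAM:

```python
# Very rough sequential write throughput sketch (hypothetical file path),
# just to illustrate the kind of workload where PCIe 5.0 sequential
# bandwidth can help. Not a substitute for CrystalDiskMark or DiskSpd.
import os
import time

TEST_FILE = r"D:\seqtest.bin"   # assumption: a path on the drive under test
BLOCK = 1024 * 1024             # 1 MiB blocks, like CrystalDiskMark SEQ1M
TOTAL = 8 * 1024 ** 3           # 8 GiB of sequential writes

buf = os.urandom(BLOCK)
start = time.perf_counter()
with open(TEST_FILE, "wb", buffering=0) as f:  # unbuffered binary writes
    for _ in range(TOTAL // BLOCK):
        f.write(buf)
    os.fsync(f.fileno())        # make sure the data actually hit the drive
elapsed = time.perf_counter() - start
print(f"Sequential write: {TOTAL / elapsed / 1024 ** 2:.0f} MiB/s")
os.remove(TEST_FILE)
```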
What Have I Done?
I have been waiting to get my hands on an M.2 NVMe PCIe 5.0 SSD for quite a while. This is despite their current limitations and the “early adopter” tax. I just wanted to run some benchmarks on it and play with it as an experiment. It also provides fodder for a blog post and possibly a video. I don’t have any illusions about any of this.
Last Thursday, I drove up to Micro Center Denver and bought a 2TB Inland TD510 SSD. It was $349.99 minus 5%, since I used a Micro Center credit card.
Here are some other 2TB M.2 NVMe drives from Micro Center, just for the sake of price comparison:
- 2TB Samsung 990 PRO (PCIe 4.0) – $249.99
- 2TB Samsung 980 PRO (PCIe 4.0) – $169.99
- 2TB Samsung 970 EVO Plus (PCIe 3.0) – $129.99
Inland is the house brand for Micro Center, and it looks like they just used the exact Phison E26 reference design that Tom’s Hardware reviewed in early January, including the heatsink, fan, and SATA/MOLEX power connectors. The reference design sample actually used thermal pads instead of thermal putty.

The first thing I did with the drive I purchased was to take it apart, to get it out of the stock heatsink with its attached, horrendous SATA/MOLEX-powered fan. I found that the blue thermal material applied at the factory had terrible coverage, so I removed it.

This was really a terrible factory application of the thermal putty!

Next, I removed the existing 2TB Samsung 990 PRO SSD from an ASUS PCIe 5.0 M.2 card and installed the 2TB Inland TD510 in its place. The M.2 card has thermal pads on both sides of the SSD, and it does not have any fans.

Front of ASUS ROG PCIe 5.0 M.2 card.

Inside of ASUS ROG PCIe 5.0 M.2 card.
Benchmark Results
This is what the 2TB Samsung 990 PRO did in CrystalDiskMark. This is very good performance for a PCIe 4.0 NVMe SSD.

CrystalDiskInfo shows some more details about the 2TB Samsung 990 PRO, showing that it was running in PCIe 4.0 x4 mode (using four PCIe 4.0 lanes).

This is what the 2TB Inland TD510 did in CrystalDiskMark. I was a little surprised at how high the SEQ1M Q1T1 results were. Usually, you have to go to higher queue depths to get high sequential performance with NAND SSDs.
The sequential write performance showed a pretty big jump compared to the Samsung 990 PRO (which is no slouch). So far, the TD510 drive temperatures under a heavy sequential load have been comparable to the Samsung 990 PRO.

CrystalDiskInfo shows some more details about the 2TB Inland TD510, showing that it was running in PCIe 5.0 x4 mode (using four PCIe 5.0 lanes).

Final Words
The 2TB Inland TD510 is 40% more expensive than a 2TB Samsung 990 PRO, which is bad. OTOH, it gets 51.4% better sequential write performance and 34.5% better sequential read performance than the 2TB Samsung 990 PRO in CrystalDiskMark. Sequential performance is more important to me than it is to most people, I think.
So far, the TD510 drive temps have been just fine without a noisy fan. This should continue to be the case as long as you have some sort of heatsink with some airflow going over it. Heavy sequential workloads are what make it run hotter than it does at idle.
I don’t think most people need to rush out and get an early PCIe 5.0 SSD for a client desktop system. These first models in the marketplace do not fully leverage the PCIe 5.0 sequential bandwidth limits, and they are considerably more expensive per GB than the fastest PCIe 4.0 SSDs. Don’t let FOMO drive you into buying something you don’t need…
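As a back-of-the-envelope check on that claim, you can compute the theoretical per-direction bandwidth of a PCIe link from its transfer rate, lane count, and 128b/130b encoding overhead. These are standard round numbers, not measured figures:

```python
# Theoretical per-direction PCIe bandwidth: 16 GT/s per lane for Gen4,
# 32 GT/s per lane for Gen5, with 128b/130b encoding overhead.
def pcie_gb_per_s(gt_per_s: float, lanes: int = 4) -> float:
    return gt_per_s * lanes * (128 / 130) / 8  # GB/s per direction

print(f"PCIe 4.0 x4: ~{pcie_gb_per_s(16):.1f} GB/s")  # ~7.9 GB/s
print(f"PCIe 5.0 x4: ~{pcie_gb_per_s(32):.1f} GB/s")  # ~15.8 GB/s
```

The roughly 10 GB/s sequential reads that these early Phison E26 drives advertise are well short of the ~15.8 GB/s ceiling of a PCIe 5.0 x4 link.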
PCIe 5.0 SSDs are not going to make your system faster for most common daily workload scenarios. Random I/O performance at low queue depths matters much more than sequential performance for most tasks.
Upgrading from a traditional magnetic hard drive to even a low-end AHCI SATA SSD is quite noticeable to most people as they use a system. OTOH, I have seen many benchmarks and blind real-world tests that show that going from an AHCI SATA SSD to a considerably faster (on paper) PCIe 4.0 NVMe SSD is much harder to notice or even detect for most tasks.
If you have any questions about this post, please ask me here in the comments or on Twitter. I am pretty active on Twitter as GlennAlanBerry. Thanks for reading!
Hey Glenn, can you please publish the IOPS results from CrystalDiskMark? Sequential reads/writes are important, but for small random operations, such as most SQL Server physical I/O, the IOPS are more important. Thanks.
Hello Glenn, Interesting article.
Like Janko, I suggest using IOPS instead of MB/s as the unit for the CrystalDiskMark screenshot results:
- For sequential 1MB, the IOPS will directly give the bandwidth
- For random 4K, IOPS is the more common unit
Did you test RND4KQ4T64?
Regards