Recently, I built a new AMD mainstream desktop system with some existing parts that I had available. This system has six storage drives, with various levels of technology and performance. I thought it would be interesting to run CrystalDiskMark 7.0.0 on each of these drives. So, here are some quick comparative CrystalDiskMark results in 2020 from those six drives.
This system has a Gigabyte B550 AORUS MASTER motherboard, which is actually a great choice for a B550 motherboard, especially if you want extra storage flexibility. AMD B550 motherboards only have PCIe 4.0 support from the CPU, not from the B550 chipset.
The B550 AORUS MASTER has three M.2 PCIe 4.0 slots that are all connected to the CPU. If you populate the second or third M.2 slot (even with a PCIe 3.0 device), the primary PCIe slot drops from PCIe 4.0 x16 to x8, meaning eight PCIe lanes instead of sixteen. This might sound bad, but it is actually not a problem for most scenarios.
You can minimize any possible impact from this by using a PCIe 4.0 video card instead of a PCIe 3.0 video card. In my case, I have an AMD Radeon 5700XT video card that is PCIe 4.0. The latest generation video cards from both AMD and NVIDIA support PCIe 4.0. PCIe 4.0 has twice the bandwidth per lane compared to PCIe 3.0.
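As a rough sanity check on that claim, here is a back-of-the-envelope calculation. This is just a sketch using nominal PCIe line rates and 128b/130b encoding overhead, not measured numbers, but it shows why a PCIe 4.0 x8 link has about the same usable bandwidth as a PCIe 3.0 x16 link:

```python
# Approximate usable PCIe bandwidth, based on nominal line rates (GT/s)
# and 128b/130b encoding. Real-world throughput is lower due to
# packet/protocol overhead.
GT_PER_SEC = {"3.0": 8.0, "4.0": 16.0}

def usable_gbps(gen: str, lanes: int) -> float:
    """Approximate usable bandwidth in GB/s for a PCIe link."""
    return GT_PER_SEC[gen] * (128 / 130) / 8 * lanes

print(f"PCIe 3.0 x16: {usable_gbps('3.0', 16):.1f} GB/s")
print(f"PCIe 4.0 x8:  {usable_gbps('4.0', 8):.1f} GB/s")
print(f"PCIe 4.0 x16: {usable_gbps('4.0', 16):.1f} GB/s")
```

The first two lines come out identical (about 15.8 GB/s), which is why dropping a PCIe 4.0 video card to x8 costs essentially nothing relative to PCIe 3.0 x16.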
What Six Drives Do We Have?
There are three M.2 PCIe NVMe SSDs, one AIC PCIe NVMe SSD, one 2.5″ SATA SSD, and one 3.5″ SATA HDD.
- 500GB Samsung 980 PRO M.2 PCIe 4.0 NVMe SSD
- (2) 1TB Samsung 970 EVO Plus M.2 PCIe 3.0 NVMe SSD
- 280GB Intel Optane 900P PCIe 3.0 NVMe SSD
- 1TB Samsung 860 EVO 2.5″ SATA AHCI SSD
- 3TB WDC WD30EZRX SATA AHCI HDD
The Samsung 980 PRO is using the Microsoft NVMe driver, while the Samsung 970 EVO Plus drives are using the Samsung NVMe driver version 3.3. Samsung has not released an NVMe driver for the Samsung 980 PRO, and I don’t think they are planning to. Finally, the Intel Optane SSD is using the Intel NVMe driver version 5.1.
Some Quick Comparative CrystalDiskMark Results in 2020
I did some quick and dirty I/O testing of these six drives with CrystalDiskMark 7.0.0. Here they are, from fastest to slowest. The IOPS figures are 4K random at Q32T16. That high queue depth makes NAND storage look better than it does at the low queue depths more typical of desktop workloads.
| Device | Seq Reads | Seq Writes | Read IOPS | Write IOPS |
|---|---|---|---|---|
| 500GB Samsung 980 PRO | 6,852 MB/s | 4,932 MB/s | 845,540 | 862,950 |
| 1TB Samsung 970 EVO Plus | 3,570 MB/s | 3,350 MB/s | 433,815 | 655,570 |
| 280GB Intel 900P x4 | 2,738 MB/s | 2,320 MB/s | 587,196 | 563,240 |
| 280GB Intel 900P x2 | 1,617 MB/s | 1,487 MB/s | 385,852 | 350,423 |
| 1TB Samsung 860 EVO | 565 MB/s | 536 MB/s | 99,469 | 92,843 |
| 3TB WD HDD | 158 MB/s | 150 MB/s | 375 | 393 |
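As a rough cross-check on the table, 4K random IOPS can be converted into an equivalent throughput figure (IOPS × 4 KiB block size). This little sketch uses the read IOPS numbers from the table above:

```python
# Convert 4K random IOPS into approximate throughput.
# CDM reports throughput in decimal MB/s, so divide by 1,000,000.
KIB = 1024

def iops_to_mb_per_sec(iops: int, block_kib: int = 4) -> float:
    """Approximate throughput in MB/s implied by an IOPS figure."""
    return iops * block_kib * KIB / 1_000_000

print(f"980 PRO 4K reads: {iops_to_mb_per_sec(845_540):,.0f} MB/s")
print(f"900P x4 4K reads: {iops_to_mb_per_sec(587_196):,.0f} MB/s")
```

The 980 PRO's 845,540 read IOPS works out to roughly 3,463 MB/s of random read throughput at that queue depth, which is about half its sequential read figure.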
The Samsung 980 PRO is my boot drive, and it is in the M.2 slot closest to the CPU. Keep in mind that this is only the 500GB size of the Samsung 980 PRO. This smaller capacity SKU has lower write performance than the 1TB or upcoming 2TB models.
Even the lowly 500GB Samsung 980 PRO has significantly higher sequential and random I/O performance than a 1TB Samsung 970 EVO Plus.
The Samsung 970 EVO Plus has been one of my favorite M.2 drives for the past year or so. This drive is in the second M.2 slot.
Here is the other 1TB Samsung 970 EVO Plus, which is in the third M.2 slot. The CDM benchmark results between the two drives are pretty consistent, which is a good sign.
This is my small 280GB Intel Optane 900P SSD, which is in the third PCIe slot, the one furthest from the CPU. That slot is connected to the B550 chipset rather than the CPU. Using CDM to test this drive helped me discover an assembly mistake I made when I first built the system. More about that later in the post!
In case you don’t know, Intel Optane SSDs don’t use the same technology as NAND flash SSDs. They use Intel’s 3D XPoint memory, which is widely reported to be a form of phase-change memory, rather than NAND flash. This gives them much lower latency, far better write endurance, and much better random I/O performance at low queue depths than NAND flash storage. Optane SSDs also don’t suffer performance degradation as they get close to being full.
This is a “legacy” Samsung 860 EVO 2.5″ SATA AHCI SSD, connected to SATA port 2 on the B550 chipset. It shows pretty typical performance for a SATA III SSD, which is limited to about 565 MB/sec by SATA III bandwidth. It also uses the older AHCI protocol.
This is still a good general purpose client SSD that only costs about $100 right now. It is far superior to any magnetic hard drive.
Finally, I have a quite old 3TB Western Digital Green WD30EZRX SATA AHCI 3.5″ 5,400rpm hard drive. The case I am using has two 3.5″ drive bays, and I had some spare SATA ports, so why not? This drive was on the slow side when it was new.
It was marketed as an energy-saving drive that had lower power usage than a “high performance” 7,200rpm desktop hard drive. It has pretty miserable performance compared to all of the other drives in the system, as you would expect. This drive is connected to SATA port 3, which is connected to the B550 chipset.
What Was Your Initial Assembly Mistake?
Well, I initially connected the Samsung 860 EVO SSD to SATA port 4, and the WD Green HDD to SATA port 5. When I did this, all of the drives were recognized and seemed to be working normally.
What I didn’t realize (until I ran CDM) was that the Intel Optane 900P SSD was only running in PCIe 3.0 x2 mode, so two PCIe 3.0 lanes instead of four. This limits your sequential bandwidth to about 1,700 MB/sec, and also hurts random I/O performance, which showed up in the lower CDM scores for the 900P in x2 mode.
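The ~1,700 MB/sec ceiling falls straight out of the lane math. This sketch assumes nominal PCIe 3.0 line rates and a roughly 13% protocol overhead figure (an assumption for illustration, not a measured value):

```python
# Why an x2 link caps sequential throughput near 1,700 MB/s:
# two PCIe 3.0 lanes of usable bandwidth, minus packet/protocol
# overhead (~13% assumed here for illustration).
LANE_GBPS_GEN3 = 8.0 * (128 / 130) / 8  # ~0.985 GB/s usable per lane

for lanes in (2, 4):
    raw = LANE_GBPS_GEN3 * lanes * 1000  # MB/s
    print(f"PCIe 3.0 x{lanes}: ~{raw:,.0f} MB/s raw, "
          f"~{raw * 0.87:,.0f} MB/s after ~13% protocol overhead")
```

The x2 figure lands right around the 1,617 MB/sec sequential reads I measured, while at x4 the 900P is limited by the drive itself rather than the link.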
An excerpt from the B550 AORUS MASTER motherboard manual explains this issue.
Once I realized this, I had two ways to fix the issue. One was to move the Intel 900P from the third PCIe slot to the second PCIe slot. The problem with that is that it would put the Intel 900P too close to the Radeon 5700XT video card, reducing the effectiveness of the cooling fans on the video card.
How I Fixed The Problem
The other solution was to connect my two SATA drives to different SATA ports that do not share bandwidth with the third PCIe slot. That is what I did, and it was quick and easy.
Many motherboards and chipsets have resource conflicts like this as you add more devices to your system. If you try to put something in every PCIe slot, every M.2 slot, and every SATA port, you will probably have some issues. The moral of the story is that you should consult your motherboard manual to avoid mistakes like this!
What Are You Doing With All Those Drives?
I’ll be running at least two named instances of SQL Server, with my various database files deliberately spread out across these drives.
- C: Samsung 980 PRO – OS, SQL Server binaries, system databases (except tempdb)
- L: Samsung 860 EVO – User database log files
- M: WDC Green HDD – SQL Server backups
- R: Samsung 970 EVO Plus – SQL Server user database data files
- S: Samsung 970 EVO Plus – SQL Server user database data files
- T: Intel 900P – tempdb
For larger databases, I plan on having a MAIN filegroup with two data files, one on the R: drive and one on the S: drive. I’ll also play around with running SQL Server backups to the C: drive, since it can do about 4,900 MB/sec of sequential writes.
How Could You Make This Better?
If I were willing to spend more money, I could have larger capacity PCIe 4.0 drives in all three M.2 slots, which would give me more total space and more performance. I could also run four SATA SSDs in RAID 10 from the B550 chipset. One big limitation of any B550 motherboard is that there is only a single PCIe 3.0 x4 link between the CPU and the chipset. Moving to an X570 chipset would give you twice the bandwidth between the CPU and chipset. You could also jump up to an AMD HEDT system to get many more PCIe 4.0 lanes.
Amazon Affiliate Links
In case you want to buy any of these parts, here are my affiliate links.
- Gigabyte B550 AORUS Master on Amazon
- 500GB Samsung 980 PRO M.2 PCIe 4.0 on Amazon
- 1TB Samsung 970 EVO Plus M.2 PCIe 3.0 on Amazon
- 280GB Intel Optane SSD 900P PCIe 3.0 on Amazon
- 1TB Samsung 860 EVO 2.5″ SATA SSD from Amazon
Another lesson here is that running a quick storage benchmark like CrystalDiskMark is a great initial sanity check for your storage, on laptops and desktops as well as servers. Enterprise storage vendors frequently denigrate storage benchmarks, claiming they are not realistic. Yet I still find them useful for showing relative performance between devices. I hope you have found these quick comparative CrystalDiskMark results in 2020 interesting!
Here are some recent relevant blog posts.
- Building a $5000.00 Desktop Workstation
- What Parts Should I Use For a New PC?
- Upcoming Consumer PCIe 4.0 SSDs
If you have any thoughts or questions about this post, please ask me here in the comments or on Twitter, where I am @GlennAlanBerry. Thank you for reading!