Testing PCIe 4.0

It's been over a year since the first consumer CPUs and SSDs supporting PCIe 4.0 hit the market, so we're a bit overdue for a testbed upgrade. Our Skylake system was adequate for even the fastest PCIe gen3 drives, but it has finally become a serious bottleneck for gen4 drives.

We have years of archived results from the old testbed, which are still relevant to the vast majority of SSDs and computers out there that do not yet support PCIe gen4. We're not ready to throw out all that work quite yet; we will still be adding new test results measured on the old system until PCIe gen4 support is more widespread, or my office gets too crowded with computers—whichever happens first. (Side note: some rackmount cases for all these test systems would be greatly appreciated.)

AnandTech 2017-2020 Skylake Consumer SSD Testbed
CPU: Intel Xeon E3-1240 v5
Motherboard: ASRock Fatal1ty E3V5 Performance Gaming/OC
Chipset: Intel C232
Memory: 4x 8GB G.SKILL Ripjaws DDR4-2400 CL15
Software: Windows 10 x64, version 1709
          Linux kernel version 4.14, fio version 3.6
          Spectre/Meltdown microcode and OS patches current as of May 2018

Since introducing the Skylake SSD testbed in 2017, we have made a few changes to our testing configurations and procedures. In December 2017, we started using a Quarch XLC programmable power module (PPM), which provides far more detailed and accurate power measurements than our old multimeter setup. In May 2019, we upgraded to a Quarch HD PPM, which can automatically compensate for voltage drop along the power cable to the drive. This let us measure M.2 PCIe SSD power more accurately: these drives can pull well over 2A from the 3.3V supply, which can easily lead to more than the 5% supply voltage drop that drives are supposed to tolerate. At the same time, we introduced a new set of idle power measurements conducted on a newer Coffee Lake system, our first (and for the moment, only) SSD testbed capable of using the full range of PCIe power management features without crashing or other bugs. That capability allowed us to start reporting idle power levels for typical desktop and best-case laptop configurations.
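To put those voltage numbers in perspective, the back-of-the-envelope arithmetic is simple: a 5% tolerance on the 3.3V rail allows only 0.165V of drop, so with a drive pulling 2A the entire cable and connector path has to stay under roughly 83 milliohms:

    \Delta V_{\max} = 0.05 \times 3.3\,\mathrm{V} = 0.165\,\mathrm{V},
    \qquad R_{\max} = \frac{0.165\,\mathrm{V}}{2\,\mathrm{A}} \approx 82.5\,\mathrm{m\Omega}

Ordinary test-rig cabling and connectors can eat through that budget, which is why compensating for the drop at the drive itself matters for accurate power measurements.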

Coffee Lake SSD Testbed for Idle Power
CPU: Intel Core i7-8700K
Motherboard: Gigabyte Aorus H370 Gaming 3 WiFi
Memory: 2x 8GB Kingston DDR4-2666

On the software side, the disclosure of the Meltdown and Spectre CPU vulnerabilities at the beginning of 2018 led to numerous mitigations that affected overall system performance. The most severe effects were on system call overhead, which has a measurable impact on high-IOPS synthetic benchmarks. In May 2018, after the dust started to settle from the first round of vulnerability disclosures, we updated the firmware, microcode and operating systems on our testbed and took the opportunity to slightly tweak some of our synthetic benchmarks. Our pre-Spectre results are archived in the SSD 2017 section of our Bench database, while the current post-Spectre results are in the SSD 2018 section. Since May 2018, many more related CPU security vulnerabilities have been found, and the mitigation techniques have changed accordingly. Our SSD testing has not been tracking those software and microcode updates, to avoid invalidating previous scores all over again. However, our new gen4-capable Ryzen test system is fully up to date with the latest firmware, microcode and OS versions.
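For the curious, recent Linux kernels (4.15 and later, so this applies to the new testbed's 5.8 kernel) expose the active mitigation state through sysfs. One quick way to check what a given system has enabled:

    # Print the mitigation status for each known CPU vulnerability
    grep . /sys/devices/system/cpu/vulnerabilities/*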

AnandTech Ryzen PCIe 4.0 Consumer SSD Testbed
CPU: AMD Ryzen 5 3600X
Motherboard: ASRock B550 Pro
Memory: 2x 16GB Mushkin DDR4-3600
Software: Linux kernel version 5.8, fio version 3.23

Our new PCIe 4 test system uses an AMD Ryzen 5 3600X processor and an ASRock B550 motherboard. This provides PCIe 4 lanes from the CPU but not from the chipset. Whenever possible, we test NVMe SSDs with CPU-provided PCIe lanes rather than going through the chipset, so the lack of PCIe gen4 from the chipset isn't an issue. (We had a similar situation back when we were using a Haswell system that supported gen3 on the CPU lanes but only gen2 on the chipset.) Going with B550 instead of X570 also avoids the potential noise of a chipset fan. The DDR4-3600 is a big jump compared to our previous testbed, but it is a fairly typical speed for current desktop builds and a reasonable overclock beyond the officially supported DDR4-3200. We're using the stock Wraith Spire cooler; our current SSD tests are mostly single-threaded, so there's no need for a bigger heatsink.
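Since the B550 chipset itself only offers PCIe 3.0, it's worth verifying that the drive under test has actually trained at 16 GT/s on the CPU's lanes. A quick sanity check on Linux (the 01:00.0 device address is just an example; find the right one with a plain lspci listing):

    # Show the link's maximum capability and current negotiated state;
    # a gen4 x4 drive should report "Speed 16GT/s, Width x4" under LnkSta
    sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'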

For now, we are still using the same test scripts to generate the same workloads as on our older Skylake testbed. We haven't tried to control for all possible factors that could lead to different scores between the two testbeds. For this review, we have re-tested several drives on the new testbed to illustrate the scale of these effects. In future reviews, we will be rolling out new synthetic benchmarks that will not be directly comparable to the tests in this review and past reviews. Several of our older benchmarks do a poor job of capturing the behavior of the increasingly common QLC SSDs, but that's not important for today's review. The performance differences between new and old testbeds should be minor, except where the CPU speed is a bottleneck. This mostly happens when testing random IO at high queue depths.
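To give a sense of what these synthetic workloads look like, here is a minimal sketch in the spirit of our fio-based scripts (hypothetical parameters, not our exact job configuration): a single thread issuing 4kB random reads at QD32, with the page cache bypassed.

    # 4kB random reads at QD32 from one thread (requires root; reads only,
    # so it is non-destructive). Substitute the device under test.
    fio --name=qd32-randread --filename=/dev/nvme0n1 \
        --ioengine=libaio --direct=1 --rw=randread --bs=4k \
        --iodepth=32 --numjobs=1 --runtime=60 --time_based

At high IOPS, that single thread spends much of its time on system call and interrupt handling, which is exactly where the faster CPU shows up in the scores.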

More important for today is the fact that our old benchmarks only test queue depths up to 32 (the limit for SATA drives), and that's not always enough to reach the full theoretical performance of a high-end NVMe drive, especially since our old tests only use one CPU core to stress the SSD. We'll be introducing a few new tests to better show these theoretical limits, but unfortunately the changes required to measure those advertised speeds also make the tests much less realistic for typical desktop workloads, so we'll continue to emphasize the more relevant low queue depth performance.
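The simplest way past both limits is to run several worker threads, each with its own queue. A sketch of that kind of saturation test (again with hypothetical parameters): four threads at QD32 each, for an effective queue depth of 128.

    # Four threads each issuing 4kB random reads at QD32 (effective QD128),
    # with results aggregated across threads
    fio --name=saturate --filename=/dev/nvme0n1 \
        --ioengine=libaio --direct=1 --rw=randread --bs=4k \
        --iodepth=32 --numjobs=4 --group_reporting \
        --runtime=60 --time_based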

Comments

  • jeremyshaw - Tuesday, September 22, 2020

    Given how fast the TLC was when the SLC cache was exhausted (and was undoubtedly working on flushing the SLC cache into TLC), I wonder how much faster the native TLC mode of the SSD could be?
  • Billy Tallis - Tuesday, September 22, 2020

    Their ISSCC 2019 presentation about the 512Gbit 128L die (which will be used in the 2TB 980 PRO) claims a write speed of 82MB/s per die. The 1TB 980 PRO is using a total of 32 of the 256Gbit dies, and if it's the same speed then that would work out to 2624 MB/s. So that suggests the total drive fill process is barely slowed down at all by the SLC caching dance, and a datacenter drive using this NAND and controller could hit almost twice the write throughput the current 960GB 983 DCT is rated for.
  • System75 - Wednesday, September 23, 2020

    Why don't you test fully-filled SSD performance anymore, like you used to with AnandTech Storage Bench - Heavy? The performance of an empty 980 Pro is not what its target consumer wants to know.
  • alyarb - Tuesday, September 22, 2020

    thanks for the memories Samsung, but I'm out
  • nandnandnand - Tuesday, September 22, 2020

    Is the Spirit of Hope dead?
  • Hyoyeon - Tuesday, September 22, 2020

    That SK Hynix P31 could become my new favorite drive.
  • Hifihedgehog - Tuesday, September 22, 2020

    Not quite. The P31 is an amazing value, but I have yet to find a lower latency drive than a Samsung. The P31 does nip at the heels and even surprises in some tests, but it still falls massively short in many latency-sensitive situations where it is easily outclassed by the 970 EVO Plus and above. You get what you pay for.

    https://www.storagereview.com/review/sk-hynix-gold...

    https://www.storagereview.com/review/sk-hynix-gold...
  • lmcd - Tuesday, September 22, 2020

    For laptop usage that latency is not even close to worth it. I'm optimistic the upgrade from a 970 EVO (don't worry, it's primarily for a capacity upgrade) will help my inefficient Ryzen 2700U hold on a bit longer when off the charger.
  • MikeMurphy - Tuesday, September 22, 2020

    The P31 performs admirably and does so while consuming very little power and producing very little heat. It doesn't trounce the Samsung drives in every metric but at that price and power budget it doesn't have to.
  • Samus - Wednesday, September 23, 2020

    If Hynix wanted to crank up the heat and power consumption, there is nothing stopping them from operating the controller at a higher frequency to reduce the latency caused by processing overhead.

    But they realize there is no need for this at the moment as they have a product that is class-leading in a class it doesn't even compete in.
