Whole-Drive Fill

This test starts with a freshly-erased drive and fills it with 128kB sequential writes at queue depth 32, recording the write speed for each 1GB segment. This is not representative of any ordinary client/consumer usage pattern, but it does let us observe transitions in the drive's behavior as it fills up. From that we can estimate the size of any SLC write cache, and get a sense of how much performance remains on the rare occasions when real-world usage keeps writing data after the cache is full.
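
The review's exact tooling isn't reproduced here, but as a rough illustration the sketch below fills a block device with 128kB sequential writes and prints the throughput of each 1GB segment. It is a simplified, synchronous Python loop (effectively queue depth 1 rather than the 32 used in the real test), and the device path is a placeholder.

```python
# Simplified whole-drive fill sketch: 128kB sequential writes, throughput
# logged once per 1GB written. Running this destroys all data on the device.
import os
import time

DEVICE = "/dev/nvme0n1"      # hypothetical target drive
BLOCK = 128 * 1024           # 128kB write size
SEGMENT = 1024 ** 3          # report throughput per 1GB written

def fill_drive(path=DEVICE):
    buf = os.urandom(BLOCK)                 # incompressible data pattern
    fd = os.open(path, os.O_WRONLY)         # O_DIRECT omitted for brevity
    written = 0
    seg_start = time.monotonic()
    try:
        while True:
            try:
                os.write(fd, buf)
            except OSError:                 # ENOSPC: the drive is full
                break
            written += BLOCK
            if written % SEGMENT == 0:
                now = time.monotonic()
                print(f"{written // SEGMENT} GB: "
                      f"{SEGMENT / (now - seg_start) / 1e6:.0f} MB/s")
                seg_start = now
    finally:
        os.close(fd)
```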

The Sabrent Rocket Q opts for the largest practical SLC cache size, which in this case is a whopping 2TB. The Samsung 870 QVO takes the opposite (and, for QLC drives, less common) approach of limiting the SLC cache to just 78GB, the same as on the 2TB and 4TB models.

[Chart: Sustained 128kB Sequential Write (Power Efficiency) — average throughput for the last 16 GB, and overall average throughput]

Both drives maintain fairly steady write performance after their caches run out, but the Sabrent Rocket Q's post-cache write speed is twice as high. The post-cache write speed of the Rocket Q is still a bit slower than a TLC SATA drive, and is just a fraction of what's typical for TLC NVMe SSDs.

On paper, Samsung's 92L QLC is capable of a program throughput of 18MB/s per die, and the 8TB 870 QVO has 64 of those dies, for an aggregate theoretical write throughput of over 1GB/s. SLC caching can account for some of the performance loss, but the lack of performance scaling beyond the 2TB model is a controller limitation rather than a NAND limitation. The Rocket Q is affected by a similar limitation, but also benefits from QLC NAND with a considerably higher program throughput of 30MB/s per die.
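
As a quick back-of-the-envelope check of that figure (ignoring SLC caching and any controller or interface limits):

```python
# Theoretical aggregate program throughput for the 8TB 870 QVO's NAND,
# ignoring SLC caching and controller/interface limits.
dies = 64              # 92L QLC dies in the 8TB model
mb_s_per_die = 18      # rated program throughput per die
print(dies * mb_s_per_die, "MB/s")   # -> 1152 MB/s, i.e. over 1GB/s
```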

Working Set Size

Most mainstream SSDs have enough DRAM to store the entire mapping table that translates logical block addresses into physical flash memory addresses. DRAMless drives only have small buffers to cache a portion of this mapping information. Some NVMe SSDs support the Host Memory Buffer feature and can borrow a piece of the host system's DRAM for this cache rather than needing lots of on-controller memory.

When accessing a logical block whose mapping is not cached, the drive needs to read the mapping from the full table stored on the flash memory before it can read the user data stored at that logical block. This adds extra latency to read operations and in the worst case may double random read latency.
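
As a toy illustration of that penalty (not any vendor's actual flash translation layer), the sketch below models a bounded mapping cache: a hit costs one flash read for the user data, while a miss costs an extra read to fetch the mapping entry first, which is where the worst-case doubling comes from.

```python
# Toy model of logical-to-physical lookups with a bounded LRU mapping cache.
# A cache hit costs one flash read (the user data); a miss costs two
# (the mapping entry, then the data).
import random
from collections import OrderedDict

class ToyFTL:
    def __init__(self, cache_entries):
        self.cache = OrderedDict()        # LBA -> physical address, LRU order
        self.cache_entries = cache_entries

    def flash_reads_for(self, lba):
        reads = 1                         # the user data itself
        if lba in self.cache:
            self.cache.move_to_end(lba)   # hit: refresh LRU position
        else:
            reads += 1                    # miss: fetch mapping entry from flash
            self.cache[lba] = lba         # placeholder physical address
            if len(self.cache) > self.cache_entries:
                self.cache.popitem(last=False)   # evict least recently used
        return reads

# Average flash reads per I/O: close to 1 when the working set fits the cache,
# close to 2 when reads span far more LBAs than the cache can hold.
ftl = ToyFTL(cache_entries=10_000)
for span in (10_000, 10_000_000):
    avg = sum(ftl.flash_reads_for(random.randrange(span))
              for _ in range(50_000)) / 50_000
    print(span, round(avg, 2))
```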

We can see the effects of the size of any mapping buffer by performing random reads from different-sized portions of the drive. When performing random reads from a small slice of the drive, we expect the mappings to all fit in the cache; when performing random reads from the entire drive, we expect mostly cache misses.

When performing this test on mainstream drives with a full-sized DRAM cache, we expect performance to be generally constant regardless of the working set size, or for performance to drop only slightly as the working set size increases.
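
A rough sketch of how such a test can be run is below: 4kB random reads confined to the first N bytes of the device, timed to estimate IOPS. It is synchronous queue-depth-1 Python rather than the benchmark's actual settings, it does not bypass the OS page cache, and the device path is a placeholder.

```python
# Working set size sketch: random 4kB reads restricted to the first N bytes
# of the device, timed to estimate IOPS.
import os
import random
import time

DEVICE = "/dev/nvme0n1"        # hypothetical target drive
READ_SIZE = 4096
GiB = 1024 ** 3

def random_read_iops(working_set_bytes, duration_s=10, path=DEVICE):
    fd = os.open(path, os.O_RDONLY)
    blocks = working_set_bytes // READ_SIZE
    ios = 0
    deadline = time.monotonic() + duration_s
    try:
        while time.monotonic() < deadline:
            os.pread(fd, READ_SIZE, random.randrange(blocks) * READ_SIZE)
            ios += 1
    finally:
        os.close(fd)
    return ios / duration_s

for ws in (1 * GiB, 16 * GiB, 64 * GiB):
    print(f"{ws // GiB} GB working set: {random_read_iops(ws):.0f} IOPS")
```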

The Sabrent Rocket Q's random read performance is unusually unsteady at small working set sizes, but levels out at a bit over 8k IOPS for working set sizes of at least 16GB. Reads scattered across the entire drive do show a substantial drop in performance, due to the limited size of the DRAM buffer on this drive.

The Samsung drive has the full 8GB of DRAM and can keep the entire drive's address mapping table in RAM, so its random read performance does not vary with working set size. However, it's clearly slower than the smaller capacities of the 870 QVO; there's some extra overhead in connecting this much flash to a 4-channel controller.

152 Comments

  • Palorim12 - Monday, December 14, 2020 - link

    Except it doesn't "constantly rewrite the data on the drive". I suggest you reread Allyn Malventano's in-depth article on the fix. If I remember correctly, it rewrites the data once, and from there an algorithm keeps a check on the data and the drift. If it notices a substantial change, it rewrites that particular block, not the whole drive. I've had an 840 EVO for years and it's still running great, and I haven't noticed any change in TBW.
  • Scour - Tuesday, December 8, 2020 - link

    I mean faster at whole-drive fill, which is/was a weak point of TLC compared to MLC.

    Most current TLC drives are getting slower because of cheap controllers with fewer channels or too-large dynamic pseudo-SLC caches.

    But the MX500, 860 Evo and SanDisk/WD 3D drives are good enough to beat some of my older MLC drives.

    I don't use SSDs for backup, so the long-term scenario isn't my use case.
  • emn13 - Saturday, December 5, 2020 - link

    There is an issue of decreasing returns, however.

    SLC -> MLC allowed for 2x capacity (minus some overhead). I don't remember anybody gnashing their teeth too much at that.
    MLC -> TLC allowed for 1.5x capacity (minus some overhead). That's not a bad deal, but it's not as impressive anymore.
    TLC -> QLC allows for 1.33x capacity (minus some overhead). That's starting to get pretty slim pickings.

    Would you rather have a 4TB QLC drive, or a 3TB TLC drive? That's the trade-off - and I wish sites would benchmark drives at higher fill rates, so it'd be easier to see more real-world performance.
  • at_clucks - Friday, December 11, 2020 - link

    @SirMaster, "People said the same thing when they moved from SLC to MLC, and again from MLC to TLC."

    You know you're allowed to change your mind and say no, right? Especially since some transitions can be acceptable, and others less so.

    The biggest thing you're missing is that the theoretical difference between TLC and QLC is bigger than the difference between SLC and TLC. Where SLC has to discriminate between 2 levels of charge, TLC has to discriminate between 8, and QLC between 16.

    Doesn't this sound like a "you were ok with me kissing you so you definitely want the D"? When TheinsanegamerN insists ATers are "techies" and they "understand technology" I'll have this comment to refer him to.
  • npz - Friday, December 4, 2020 - link

    Exactly. I always suspected that QLC would be used as an excuse to push (3D) MLC/TLC prices up, or to keep them from dropping as far as they otherwise would. TLC prices have now only fallen back to what I paid a couple of years ago.
  • flyingpants265 - Sunday, December 6, 2020 - link

    Prediction: they will try to fix the price of 1TB NVMe drives at $100 USD for several years. By now it should probably be closer to $50 USD.
  • Spunjji - Monday, December 7, 2020 - link

    "should probably be closer to $50 USD"

    You'd need to provide some evidence to support that, because last I knew it was getting more and more difficult to scale capacity in a way that also reduced cost-per-bit - so you're talking increasing R&D investment and diminishing returns.
  • Oxford Guy - Tuesday, December 8, 2020 - link

    QLC is diminishing returns for chasing density ahead of all other factors.
  • magreen - Friday, December 4, 2020 - link

    Why is that useful for NAS? A hard drive will saturate that network interface.
  • RealBeast - Friday, December 4, 2020 - link

    Yup, my eight-drive RAID 6 runs at about 750MB/sec for large sequential transfers over SFP+ to my backup array. No need for SSDs, and I certainly couldn't afford them -- the 14TB enterprise SAS drives I got were only $250 each in the early summer.
