Burst IO Performance

Our burst IO tests operate at queue depth 1 and perform several short data transfers interspersed with idle time. The random read and write tests consist of 32 bursts of up to 64MB each. The sequential read and write tests use eight bursts of up to 128MB each. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.
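The structure of these burst tests, short QD1 transfers separated by idle time, can be sketched as follows. This is a toy version that reads a small buffered scratch file, so it illustrates only the shape of the test, not real drive performance; an actual benchmark would use O_DIRECT access to the raw device and random offsets, and the burst sizes here are scaled far down:

```python
import os
import tempfile
import time

BLOCK = 4096           # 4kB per IO, as in the random read test
BURST_BYTES = 1 << 20  # scaled-down burst (the real test uses up to 64MB)
BURSTS = 4             # scaled down from 32
IDLE_S = 0.05          # idle time between bursts

# Prepare a scratch file to read from.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(BURST_BYTES * BURSTS))
    path = f.name

speeds = []
with open(path, "rb") as f:
    for _ in range(BURSTS):
        start = time.perf_counter()
        done = 0
        while done < BURST_BYTES:  # queue depth 1: one IO outstanding at a time
            f.read(BLOCK)
            done += BLOCK
        elapsed = time.perf_counter() - start
        speeds.append(BURST_BYTES / elapsed / 1e6)  # MB/s for this burst
        time.sleep(IDLE_S)  # let the drive go idle between bursts

os.unlink(path)
print(f"burst speeds (MB/s): {[round(s, 1) for s in speeds]}")
```

The per-burst timing matters because an idle drive may need to wake from a low-power state, so the first IOs of a burst can be slower than steady-state.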

[Interactive results charts: QD1 Burst IO Performance (Random Read, Random Write, Sequential Read, Sequential Write)]

Our burst IO tests show little to no performance difference between the Samsung 870 EVO and other top SATA SSDs. The 1MB sequential transfers are already hitting the SATA throughput limit even at QD1, and the 4kB random IOs are at best marginally improved over Samsung's previous generation. Samsung's slight improvement to random read latency is enough to catch up to Micron's drives, as represented by the Crucial MX500, but a 10% gain hardly matters when NVMe drives can double this performance.

Sustained IO Performance

Our sustained IO tests exercise a range of queue depths and transfer more data than the burst IO tests, but still have limits to keep the duration somewhat realistic. The primary scores we report are focused on the low queue depths that make up the bulk of consumer storage workloads. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.
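A score that emphasizes low queue depths amounts to a weighted average over per-QD results. As a toy illustration (both the throughput numbers and the weights below are invented for the example, not AnandTech's actual methodology):

```python
# Hypothetical per-queue-depth throughput results (MB/s) for one drive.
results = {1: 40, 2: 75, 4: 140, 8: 250, 16: 400, 32: 520}

# Illustrative weights emphasizing the low queue depths that dominate
# consumer workloads (NOT the weights the benchmark suite actually uses).
weights = {1: 0.40, 2: 0.25, 4: 0.20, 8: 0.10, 16: 0.04, 32: 0.01}

score = sum(results[qd] * w for qd, w in weights.items())
print(f"weighted sustained score: {score:.1f} MB/s")
```

Weighting this way means a drive that only shines at QD32 gains little in the overall score, which matches how consumer workloads actually behave.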

[Interactive results charts: Sustained IO Performance (Random Read, Random Write, Sequential Read, Sequential Write)]

On the longer synthetic tests that bring in some slightly higher queue depths, the improved random read performance of the 870 EVO is a bit more clear. In one sense it is impressive to see Samsung squeeze a bit more performance out of the same SATA bottleneck, but we're still talking about small incremental refinements where NVMe enables drastic improvements. Aside from random reads, the 870 EVO's performance improvements are exceedingly minute and it should be considered essentially tied with most other recent mainstream TLC SATA drives.

[Interactive results charts: Sustained IO Performance, power and efficiency (Random Read, Random Write, Sequential Read, Sequential Write)]

Power consumption is one area where Samsung could theoretically offer more significant improvements despite still being constrained by the same SATA interface, but the 870 EVO doesn't really deliver any meaningful improvements there. The 4TB model is consistently a bit less efficient than the 1TB model on account of having more memory to keep powered up, but when comparing the 1TB model against its predecessor and competing drives there's nothing particularly noteworthy about the 870 EVO. SK hynix's Gold S31 has a modest efficiency advantage for random IO while Samsung is technically the most efficient of these SATA drives for sequential IO.
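The efficiency comparison boils down to a simple ratio of throughput to power draw. A minimal sketch, with hypothetical numbers chosen only to illustrate why a larger drive with more DRAM and NAND to keep powered can score worse at the same throughput:

```python
def efficiency(throughput_mbps: float, power_w: float) -> float:
    """Throughput per watt, the metric used to compare drive efficiency."""
    return throughput_mbps / power_w

# Hypothetical numbers: a 1TB-class SATA drive near the ~550 MB/s interface
# limit drawing 2.5 W, vs a 4TB-class model drawing 3.2 W at the same speed.
print(f"1TB-class: {efficiency(550, 2.5):.0f} MB/s per W")
print(f"4TB-class: {efficiency(550, 3.2):.0f} MB/s per W")
```

Because sequential throughput is pinned at the SATA ceiling for all of these drives, efficiency differences come almost entirely from the power side of the ratio.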

[Interactive queue depth scaling charts: Random Read, Random Write, Sequential Read, Sequential Write]

The queue depth scaling behavior of the 870 EVOs is almost identical to the 860 EVOs and still quite typical for mainstream SATA drives. For random reads the 870 EVOs saturate around QD16, while for random writes QD4 suffices. On the sequential IO tests there's only a small performance gain from QD1 to QD16, and the more interesting question is how stable performance is through the rest of the sequential tests. The 1TB 870 EVO seems to run out of SLC cache a bit earlier than the 860 EVO when the sequential write test is running on an 80% full drive, but the 4TB model has plenty of cache to finish out that test at full speed.
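The saturation behavior described above follows from Little's law: delivered IOPS is roughly queue depth divided by per-IO latency, until the drive or interface limit is reached, after which extra queue depth buys nothing. A rough sketch with hypothetical numbers (~90 µs per 4kB random read, a drive topping out near 95k IOPS; the SATA 6Gbps link itself caps out around 550 MB/s, i.e. roughly 134k IOPS at 4kB):

```python
def iops(queue_depth: int, latency_s: float, ceiling: float) -> float:
    """Little's law: throughput = concurrency / latency, capped by the
    drive's (or the interface's) maximum random IO rate."""
    return min(queue_depth / latency_s, ceiling)

# Hypothetical values for illustration only.
for qd in (1, 2, 4, 8, 16, 32):
    print(f"QD{qd:>2}: ~{iops(qd, 90e-6, 95_000):,.0f} IOPS")
```

With these numbers the projected throughput stops growing between QD8 and QD16, which is the same qualitative shape the 870 EVO's random read scaling shows.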

Random Read Performance Consistency

This test illustrates how drives with higher throughput don't always offer better IO latency and Quality of Service (QoS), and that latency often gets much worse when a drive is pushed to its limits. This test is more intense than real-world consumer workloads and the results can be a bit noisy, but large differences that show up clearly on a log scale plot are meaningful. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.
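The QoS metrics here are average and 99th-percentile latency. A small sketch of how a 99th-percentile figure exposes a slow tail that the average mostly hides (the latency sample below is synthetic, generated purely for illustration):

```python
import random
import statistics

# Synthetic latency sample (µs): mostly fast reads plus a slow 2% tail.
random.seed(0)
latencies = ([random.gauss(100, 10) for _ in range(980)]
             + [random.gauss(600, 50) for _ in range(20)])

mean = statistics.fmean(latencies)
p99 = sorted(latencies)[int(len(latencies) * 0.99) - 1]  # 99th percentile
print(f"mean: {mean:.0f} µs, 99th percentile: {p99:.0f} µs")
```

The tail barely moves the mean, but the 99th percentile lands squarely in the slow cluster, which is why QoS comparisons report percentiles rather than averages alone.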

Consistent with most of our other read performance tests, the Samsung 870 EVO shows slightly better average and 99th percentile random read latencies than most of its SATA competition. Even some of the entry-level NVMe drives that can deliver higher random read throughput than is possible for the 870 EVO still have clearly higher latency across most or all of the throughput range that the 870 EVO can cover. A QLC-based or DRAMless TLC NVMe SSD can potentially offer far higher throughput than any SATA SSD, but clearly beating the 870 EVO on both throughput and latency requires stepping up to a more mainstream NVMe design with DRAM and TLC NAND.

Comments

  • Snowleopard3000 - Friday, February 19, 2021 - link

    It would be nice to see TLC based 16gb and 32gb SSDs..... look at all that empty space, it's not like it is that much more work to do it and it will fit fine.
  • Snowleopard3000 - Friday, February 19, 2021 - link

    TB not GB
  • MDD1963 - Friday, February 19, 2021 - link

    With my slightly less than 10 TBW typical usage per year based on my 960 EVO, the 4 TB 870 EVO should last me...240 years? :)
  • Oxford Guy - Friday, February 19, 2021 - link

    I suppose if you're set on buying a Samsung SATA drive, at least if they're all as bad as the 860 1TB QLC drive that has never been stable in a Zen system.
  • toke - Saturday, February 20, 2021 - link

    Isn't the SK hynix Gold S31 far better in all graphs by a big margin?
  • GeoffreyA - Sunday, February 21, 2021 - link

    This will be a silly question no doubt, but is there no way to implement the NVMe protocol over SATA? Or is it not possible circuit-wise and electrically? (I understand that SATA is serial and NVMe is more parallel in nature.) If this could be done, while keeping a legacy mode for older drives, problem solved: the newer interface in the same port, while retaining backwards compatibility. Or would this idea be disastrous?
  • R3Z3N - Friday, February 26, 2021 - link

    I really wanted to buy 6 8TB SSDs for my build. Instead I went with 2 PCIe NVMe SSDs: a 980 Pro 1TB boot drive and a Sabrent Rocket 4 Plus 2TB, plus 4 SATA SSDs in RAID 0 on the mobo, and 6 10TB Exos 3.5 drives in RAID 10. I get around 1400MBps read on the RAID 10 and 450MBps write with my HighPoint 3720A HBA. It still is a pain to have to create proxy footage on the NVMe drives.... I really really want to replace the HDDs with SSDs sooooo bad.
  • Henry 3 Dogg - Friday, July 9, 2021 - link

    "...and power efficiency cannot make big leaps without getting rid of the SATA performance limits."

    OK, I may be being thick. But that just doesn't seem rational at all.
  • PushT - Monday, October 18, 2021 - link

    "Does the world need premium Sata SSDs?" Every single article, you read the same sentiment, formulated more ore less in the same manner. Do we need Sata ssds ? Yes, we need premium large sized sata ssds, as we have needed premium large size HDDs. Increasingly cheaper bulk storage, with the added benefit of EXTREMELY higher random throughput over HDDs, complying with infrastructures all over the world. What is the point of regurgitating the same cheap one- liner in every single article ?
    We have all been using NVME for years now, and yes we know its faster, and yes we know it is the future. But have the thermal challenges been solved ? And more interestingly, is the infrastructure around it changing fast enough for Sata to be discontinued ? If Sata was excluded from computer architecture as of now, could the world cope ? Nope. It is and has always been about price versus performace, and as long as we dont see the demise of HDDs I can't imagine why we would see the death of Sata SSDs.
    So to the original question: Does the world need premium Sata SSDs ? I think the question is wrong. What the industry is doing is trying to find ways to make cheaper nand that is also just as fast. Price versus performance. 870 Evo is not a premium SSD. It is the commercial sample of experimentation. One final question : Given that PREMIUM ssds will cost you an arm and a leg, would you rather your computer system was a hybrid of nvme/Sata SSD, for ultra fast, large and relatively cheap storage and application - Or would you say a few TBs of the insanely fast and expensive premium NVME at the same price, would solve all your problems ? The answer would be the same for most people in the world. Sorry about the rant but it was a long time coming..
  • pentaxmx - Thursday, November 25, 2021 - link

    Interesting. Why is the reviewer disappointed that the Crucial MX changed from 64-layer to 96-layer? Is 96-layer NAND inferior to 64-layer? Shouldn't that be considered an upgrade???
