The Intel Optane Memory H10 Review: QLC and Optane In One SSD
by Billy Tallis on April 22, 2019 11:50 AM EST

Test Setup
Our primary system for consumer SSD testing is a Skylake desktop, equipped with a Quarch XLC Power Module for detailed SSD power measurements and used for our ATSB IO trace tests and synthetic benchmarks with FIO. This system predates all of the Optane Memory products, and Intel and their motherboard partners did not want to roll out firmware updates to provide Optane Memory caching support on Skylake-generation systems. On this testbed, we can only access the QLC NAND half of the Optane Memory H10.
As usual for new Optane Memory releases, Intel sent us an entire system with the new Optane Memory H10 pre-installed and configured. This year's review system is an HP Spectre x360 13t notebook with an Intel Core i7-8565U Whiskey Lake processor and 16GB of DDR4. In previous years Intel has provided desktop systems for testing Optane Memory products, but the H10's biggest selling point is that it is a single M.2 module that fits in small systems, so the choice of a 13" notebook this year makes sense. Intel has confirmed that the Spectre x360 will soon be available for purchase with the Optane Memory H10 as one of the storage options.
The HP Spectre x360 13t has only one M.2 Type-M slot, so to test multi-drive caching configurations or anything involving SATA, we made use of the Coffee Lake and Kaby Lake systems Intel provided for previous Optane Memory releases. For application benchmarks like SYSmark and PCMark, the scores are heavily influenced by the differences in CPU power and RAM between these machines, so we have to list three sets of scores for each storage configuration tested. However, our AnandTech Storage Bench IO trace tests and our synthetic FIO benchmarks produce nearly identical results across all three systems, so we can make direct comparisons, and each test only needs to list one set of scores per storage configuration.
Intel-provided Optane Memory Review Systems

| Platform | Kaby Lake | Coffee Lake | Whiskey Lake |
|---|---|---|---|
| CPU | Intel Core i5-7400 | Intel Core i7-8700K | Intel Core i7-8565U |
| Motherboard | ASUS PRIME Z270-A | Gigabyte Aorus H370 Gaming 3 WiFi | HP Spectre x360 13t |
| Chipset | Intel Z270 | Intel H370 | — |
| Memory | 2x 4GB DDR4-2666 | 2x 8GB DDR4-2666 | 16GB DDR4-2400 |
| Case | In Win C583 | In Win C583 | — |
| Power Supply | Cooler Master G550M | Cooler Master G550M | HP 65W USB-C |
| Display Resolution | 1920x1200 (SYSmark), 1920x1080 (PCMark) | 1920x1080 | 1920x1080 |
| OS | Windows 10 64-bit, version 1803 | Windows 10 64-bit, version 1803 | Windows 10 64-bit, version 1803 |
Intel's Optane Memory caching software is Windows-only, so our usual Linux-based synthetic testing with FIO had to be adapted to run on Windows. The configuration and test procedure are as close as practical to our usual methodology, but a few important differences mean the results in this review are not directly comparable to those from our usual SSD reviews or the results posted in Bench. In particular, it is impossible to perform a secure erase or NVMe format from within Windows, except in the rare instance where a vendor provides a tool that only works with their drives. Our testing usually involves erasing the drive between major phases in order to restore performance without waiting for the SSD's background garbage collection to finish cleaning up and freeing up SLC cache. For this review's Windows-based synthetic benchmarks, the tests that write the least data were run first, and those that require filling the entire drive were saved for last.
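For reference, here is a minimal sketch of how one of our FIO jobs translates to Windows; the job name, target drive number, and runtime below are illustrative rather than our exact test definitions:

```
; Minimal FIO job adapted for Windows (illustrative, not our exact job file)
[global]
; Windows async IO engine in place of Linux's libaio
ioengine=windowsaio
; fio on Windows runs its workers as threads rather than forked processes
thread
; bypass the OS page cache
direct=1
; raw access to the physical drive under test (drive number is hypothetical)
filename=\\.\PhysicalDrive1

[qd1-random-read]
rw=randread
bs=4k
iodepth=1
time_based
runtime=60
```

A job like this is run with `fio jobfile.fio` from an elevated command prompt, since raw physical drive access requires administrator rights.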
Optane Memory caching also requires using Intel's storage drivers. Our usual procedure for Windows-based tests is to use Microsoft's own NVMe driver rather than bother with vendor-specific drivers. The tests of Optane caching configurations in this review were conducted with Intel's drivers, but all single-drive tests (including tests of just one side of the Optane Memory H10) use the Windows default driver.
Our usual Skylake testbed is set up to test NVMe SSDs in the primary PCIe x16 slot connected to the CPU. Optane Memory caching requires that the drives be connected through the chipset, so there is a small possibility that congestion on the chipset's x4 DMI uplink could affect the fastest drives, but the H10 is unlikely to come close to saturating this connection.
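As a back-of-the-envelope check (DMI 3.0 is electrically a PCIe 3.0 x4 link; these figures ignore protocol overhead):

$$ 8\,\text{GT/s} \times \tfrac{128}{130} \approx 0.985\,\text{GB/s per lane} $$

$$ \text{DMI 3.0 (x4)} \approx 3.94\,\text{GB/s}, \qquad \text{each half of the H10 (x2)} \approx 1.97\,\text{GB/s} $$

Both x2 halves running flat out could in principle fill the link, but neither half of the H10 sustains anywhere near its theoretical x2 ceiling in practice.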
We try to include detailed power measurements alongside almost all of our performance tests, but this review is missing most of those. Our current power measurement equipment is unable to supply power to an M.2 slot in a notebook and requires a regular PCIe x4 slot for the power injection fixture. We have new equipment on the way from Quarch to remedy this limitation, and will post an article about the upgrade after we have had time to re-test the drives in this review with power measurement on the HP notebook.
Comments
Valantar - Tuesday, April 23, 2019
"Why hamper it with a slower bus?": cost. This is a low-end product, not a high-end one. The 970 EVO can at best be called "midrange" (though it keeps up with the high end for performance in a lot of cases). Intel doesn't yet have a monolithic controller that can work with both NAND and Optane, so this is (as the review clearly states) two devices on one PCB. The use case is making a cheap but fast OEM drive, where caching to the Optane part _can_ result in noticeable performance increases for everyday consumer workloads, but is unlikely to matter in any kind of stress test. The problem is that adding Optane drives up prices, meaning that this doesn't compete against QLC drives (which it would beat in terms of user experience) but also TLC drives which would likely be faster in all but the most cache-friendly, bursty workloads.I see this kind of concept as the "killer app" for Optane outside of datacenters and high-end workstations, but this implementation is nonsense due to the lack of a suitable controller. If the drive had a single controller with an x4 interface, replaced the DRAM buffer with a sizeable Optane cache, and came in QLC-like capacities, it would be _amazing_. Great capacity, great low-QD speeds (for anything cached), great price. As it stands, it's ... meh.
cb88 - Friday, May 17, 2019
Therein lies the BS... Optane cannot compete as a low-end product as it is too expensive... so they should have settled for being the best premium product with 4x PCIe... probably even maxing out PCIe 4.0 easily once it launches.

CheapSushi - Wednesday, April 24, 2019
I think you're mixing up why it would be faster. The lanes are the easier part. It's inherently faster. But you can't magically make x2 PCIe lanes push more bandwidth than x4 PCIe lanes on the same standard (3.0 for example).

twotwotwo - Monday, April 22, 2019
Prices not announced, so they can still make it cheaper.

Seems like a tricky situation unless it's priced way below anything that performs similarly though. Faster options on one side and really cheap drives that are plenty for mainstream use on the other.
CaedenV - Monday, April 22, 2019
lol cheaper? All of the parts of a traditional SSD, *plus* all of the added R&D, parts, and software for the Optane half of the drive?

I will be impressed if this is only 2x the price of a Sammy... and still slower.
DanNeely - Monday, April 22, 2019
Ultimately, to scale this I think Intel is going to have to add an on-card PCIe switch. With the company currently dominating the market setting prices to fleece enterprise customers, I suspect that means they'll need to design something in-house. PCIe4 will help some, but normal drives will get faster too.

kpb321 - Monday, April 22, 2019
I don't think that would end up working out well. As the article mentions, PCI-E switches tend to be power hungry, which wouldn't work well and would add yet another part to the drive and push the BOM up even higher. For this to work you'd need to deliver TLC-level performance or better but at a lower cost. Ultimately the only way I can see that working would be moving to a single integrated controller. From a cost perspective, eliminating the DRAM buffer by using a combination of the Optane memory and HBM should probably work. This would probably push it into a largely or completely hardware-managed solution and would improve compatibility and eliminate the issues with the PCI-E bifurcation and bottlenecks.

ksec - Monday, April 22, 2019
Yes, I think we will need a Single Controller to see its true potential and if it has a market fit.

Cause right now I am not seeing any real benefits or advantage of using this compared to decent M.2 SSD.
Kevin G - Monday, April 22, 2019
What Intel needs to do for this to really take off is to have a combo NAND + Optane controller capable of handling both types natively. This would eliminate the need for a PCIe switch and free up board space on the small M.2 sticks. A win-win scenario if Intel puts forward the development investment.

e1jones - Monday, April 22, 2019
A solution for something in search of a problem. And, typical Intel, clearly incompatible with a lot of modern systems, much less older systems. Why do they keep trying to limit the usability of Optane!?

In a world where each half was actually accessible, it might be useful for ZFS/NAS apps, where the Optane could be the log or cache and the QLC could be a WORM storage tier.
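For what it's worth, a minimal sketch of that layout in ZFS terms, assuming a platform that bifurcates the M.2 link so both halves of the H10 enumerate as separate NVMe devices (the pool and device names here are hypothetical):

```
# QLC half as a bulk, mostly write-once data pool
zpool create archive /dev/nvme1n1
# Optane half as a separate ZFS intent log (SLOG)
zpool add archive log /dev/nvme0n1
# ...or instead as a read cache (L2ARC):
# zpool add archive cache /dev/nvme0n1
```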