NVMe vs AHCI: Another Win for PCIe

Improving performance is never just about hardware. Faster hardware only helps until the software becomes the bottleneck, and ultimately more efficient software is needed to take full advantage of it. This applies to SSDs as well. With PCIe the potential bandwidth increases dramatically, and to take full advantage of the faster physical interface we need a software interface that is optimized specifically for SSDs and PCIe.

AHCI (Advanced Host Controller Interface) dates back to 2004 and was designed with hard drives in mind. While that doesn't rule out SSDs, AHCI is optimized for high-latency rotating media rather than low-latency non-volatile storage. As a result, AHCI can't take full advantage of SSDs, and since the future is in non-volatile storage (like NAND and MRAM), the industry had to develop a software interface that removes the limits of AHCI.

The result is NVMe, short for Non-Volatile Memory Express. It was developed by an industry consortium with over 80 members, with development directed by giants like Intel, Samsung, and LSI. NVMe is built specifically for SSDs and PCIe, and as software interfaces usually live for at least a decade before being replaced, NVMe was designed to meet the industry's needs as we move to future memory technologies (e.g. we'll likely see RRAM and MRAM enter the storage market before 2020).

                      NVMe                       AHCI
Latency               2.8 µs                     6.0 µs
Maximum Queue Depth   Up to 64K queues with      Up to 1 queue with
                      64K commands each          32 commands each
Multicore Support     Yes                        Limited
4KB Efficiency        One 64B fetch              Two serialized host
                                                 DRAM fetches required

Source: Intel

The biggest advantage of NVMe is its lower latency. This is mostly due to a streamlined storage stack and the fact that NVMe requires no register reads to issue a command. AHCI requires four uncacheable register reads per command, which results in ~2.5µs of additional latency.
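As a back-of-the-envelope sketch of that overhead: the article only gives the ~2.5µs total for four reads, so the per-read cost below is an assumed figure chosen to match it, not a measured number.

```python
# Rough illustration of AHCI's per-command register-read overhead.
# The per-read cost is an assumption (four reads totaling ~2.5 us,
# as stated above); real costs vary by platform and chipset.

AHCI_REGISTER_READS_PER_COMMAND = 4
NVME_REGISTER_READS_PER_COMMAND = 0   # NVMe needs no register reads to issue
COST_PER_UNCACHEABLE_READ_US = 0.625  # assumed average, in microseconds

def register_overhead_us(commands: int, reads_per_command: int) -> float:
    """Latency spent on uncacheable register reads across N commands."""
    return commands * reads_per_command * COST_PER_UNCACHEABLE_READ_US

print(register_overhead_us(1, AHCI_REGISTER_READS_PER_COMMAND))     # 2.5
print(register_overhead_us(1000, AHCI_REGISTER_READS_PER_COMMAND))  # 2500.0
print(register_overhead_us(1000, NVME_REGISTER_READS_PER_COMMAND))  # 0.0
```

At high IOPS the per-command savings compound: a thousand commands spend roughly 2.5ms of pure register-read time under AHCI and none under NVMe.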

Another important improvement is support for multiple queues and higher queue depths. Multiple queues ensure that the CPU can be used to its full potential and that IOPS are not bottlenecked by a single core.
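A toy model of the queueing difference, using the depths from the table above (one 32-entry queue versus a deep queue per core). The queue class is purely illustrative, not a driver API:

```python
from collections import deque

AHCI_QUEUE_DEPTH = 32
NVME_QUEUE_DEPTH = 64 * 1024  # up to 64K commands per queue

class SubmissionQueue:
    """Minimal bounded command queue for illustration."""
    def __init__(self, depth: int):
        self.depth = depth
        self.commands = deque()

    def submit(self, cmd) -> bool:
        if len(self.commands) >= self.depth:
            return False  # queue full; the caller has to wait
        self.commands.append(cmd)
        return True

# AHCI: all cores funnel I/O through one shallow queue, so they
# serialize on it and stall once 32 commands are outstanding.
ahci = SubmissionQueue(AHCI_QUEUE_DEPTH)

# NVMe: each core gets its own deep submission queue, so cores can
# issue commands independently without cross-core contention.
num_cores = 8
nvme = {core: SubmissionQueue(NVME_QUEUE_DEPTH) for core in range(num_cores)}

for i in range(100):
    nvme[i % num_cores].submit(f"cmd-{i}")  # never hits the depth limit here

print(sum(len(q.commands) for q in nvme.values()))  # 100 commands in flight
```

The same 100 commands pushed through the single AHCI queue would stall after 32, which is exactly the single-core, shallow-queue bottleneck NVMe removes.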

Source: Microsoft

Obviously enterprise is the biggest beneficiary of NVMe because its workloads are much heavier and SATA/AHCI can't provide the necessary performance. The client market benefits from NVMe too, just not as much. As I explained on the previous page, even moderate improvements in performance result in increased battery life, and that's what NVMe will offer. Thanks to lower latency the disk usage time will decrease, which results in more time spent at idle and thus increased battery life. There can also be corner cases where the better queue support helps with performance.

Source: Intel

With future non-volatile memory technologies and NVMe, the overall latency can be cut to one fifth of the current ~100µs, an improvement that will be noticeable in everyday client usage too. Currently I don't think any client PCIe SSDs support NVMe (enterprise has been faster to adopt it), but the SF-3700 will once it's released later this year. Driver support for both Windows and Linux already exists, so it's now up to SSD OEMs to release compatible SSDs.


131 Comments


  • MrSpadge - Friday, March 14, 2014 - link

    You're right, the increased interface power consumption won't matter much and will be counterbalanced by the quicker execution time. But as far as I understand, the bulk of SSD power draw under load, especially for writes, comes from the NAND itself (unless the controller is really inefficient). If that's true, higher performance automatically equates to higher SSD power draw under load.
  • willis936 - Thursday, March 13, 2014 - link

    I love SATAe and NVMe, but whenever SAS is mentioned as a comparison it would be nice to use 12G numbers. I noticed a Microsoft graph showed the 6G figure but didn't even label it. A doubling of bandwidth is nothing to sneeze at. That said, SAS is expensive and is for a very different market.
  • Flunk - Thursday, March 13, 2014 - link

    I think they're really making a mistake trying to keep the same connector as SATA. Tacking on a new cable that looks so unwieldy just seems silly. And why not just use M.2 slots? Especially if this is for notebooks (and based on the power usage comparisons it seems like it is, otherwise why would it matter?).

    I suspect this will go nowhere. Reminds me of ISA 2.0.
  • Rajinder Gill - Thursday, March 13, 2014 - link

    Backwards compatibility with existing SATA devices was the primary reason for keeping the connector as part of the interface. :)
  • Flunk - Thursday, March 13, 2014 - link

    I understand that, but there isn't much reason not to have two sets of ports, with the legacy ones slowly disappearing on desktops. The ports are not large and there is plenty of space. This way we're stuck with a future of badly-designed ports far past the end of SATA's lifetime.
  • Kristian Vättö - Thursday, March 13, 2014 - link

    SATA Express is mainly for desktops -- in mobile M.2 will be the dominant form factor (though SATAe might have some place there too as I mentioned in the article).

    As for power consumption and battery life, that was about PCIe in general.
  • phoenix_rizzen - Thursday, March 13, 2014 - link

    So why not add M.2 slots to the desktop, in a vertical orientation, and just make M.2-to-M.2 cables? Then add the M.2 connector to desktop drives?
  • TheinsanegamerN - Monday, March 24, 2014 - link

    because, silly, that would mean being progressive and eliminating all backwards compatibility, and we CANT do that! /s

    in all seriousness, that would be much nicer. manufacturers would probably throw a temper tantrum, but aside from that, it would be a great solution.
  • grahaman27 - Thursday, March 13, 2014 - link

    I would like to see USB 3.1 replace SATA 6Gbps. It sounds unusual, but with the combination of 10Gbps speeds, the new two-way small connector, and integrated power, I think it would really be useful for the expandability and tidiness inside my computer.
  • Veramocor - Thursday, March 13, 2014 - link

    Just posted that later on. I have an external USB 3.0 hard drive; why can't I have an internal one? Even better would be Thunderbolt 2 at 20 Gbps.
