NVMe vs AHCI: Another Win for PCIe

Improving performance is never just about hardware. Faster hardware can only expose the limits of the software, and ultimately more efficient software is needed to take full advantage of it. This applies to SSDs as well: with PCIe the potential bandwidth increases dramatically, but to take full advantage of the faster physical interface, we need a software interface that is optimized specifically for SSDs and PCIe.

AHCI (Advanced Host Controller Interface) dates back to 2004 and was designed with hard drives in mind. While that doesn't rule out SSDs, AHCI is optimized for high-latency rotating media rather than low-latency non-volatile storage. As a result, AHCI can't take full advantage of SSDs, and since the future is in non-volatile storage (like NAND and MRAM), the industry had to develop a software interface that removes the limitations of AHCI.

The result is NVMe, short for Non-Volatile Memory Express. It was developed by an industry consortium of over 80 members, with development directed by giants like Intel, Samsung, and LSI. NVMe is built specifically for SSDs and PCIe, and since software interfaces usually live for at least a decade before being replaced, NVMe was designed to meet the industry's needs as we move to future memory technologies (e.g. we'll likely see RRAM and MRAM enter the storage market before 2020).

                      NVMe                       AHCI
Latency               2.8 µs                     6.0 µs
Maximum Queue Depth   Up to 64K queues with      Up to 1 queue with
                      64K commands each          32 commands each
Multicore Support     Yes                        Limited
4KB Efficiency        One 64B fetch              Two serialized host DRAM
                                                 fetches required

Source: Intel
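The queue-depth gap in the table is worth spelling out. A quick back-of-envelope calculation (my own sketch, not from the source material) shows how many commands each interface can keep in flight:

```python
# Back-of-envelope comparison of outstanding commands (illustrative only).
NVME_QUEUES = 64 * 1024          # NVMe: up to 64K queues
NVME_DEPTH = 64 * 1024           # ...with up to 64K commands each
AHCI_QUEUES = 1                  # AHCI: a single command queue
AHCI_DEPTH = 32                  # ...with 32 outstanding commands (NCQ)

nvme_total = NVME_QUEUES * NVME_DEPTH   # commands in flight under NVMe
ahci_total = AHCI_QUEUES * AHCI_DEPTH   # commands in flight under AHCI

print(f"NVMe: {nvme_total:,} vs AHCI: {ahci_total}")
print(f"Ratio: {nvme_total // ahci_total:,}x")
```

That is roughly 4.3 billion outstanding commands versus 32, which is why the single AHCI queue becomes the bottleneck long before the drive itself does.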

The biggest advantage of NVMe is its lower latency. This is mostly due to a streamlined storage stack and the fact that NVMe requires no register reads to issue a command. AHCI requires four uncacheable register reads per command, which results in ~2.5 µs of additional latency.
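To put that overhead in perspective, here is a rough model of the per-command cost. Note that the ~625 ns per uncacheable read is an assumption I've back-derived from the ~2.5 µs figure above, not a measured value:

```python
# Rough model of AHCI's per-command register-read overhead (illustrative).
UNCACHEABLE_READ_NS = 625   # assumed cost of one uncacheable MMIO register read
AHCI_READS_PER_CMD = 4      # AHCI needs four register reads to issue a command
NVME_READS_PER_CMD = 0      # NVMe issues commands with no register reads

ahci_overhead_us = AHCI_READS_PER_CMD * UNCACHEABLE_READ_NS / 1000
nvme_overhead_us = NVME_READS_PER_CMD * UNCACHEABLE_READ_NS / 1000

print(f"AHCI adds ~{ahci_overhead_us:.1f} µs per command")  # ~2.5 µs
print(f"NVMe adds ~{nvme_overhead_us:.1f} µs per command")  # 0.0 µs
```

A couple of microseconds is noise for a hard drive seeking at ~10 ms, but against the table's 2.8 µs NVMe latency it is nearly the entire budget, which is why eliminating the register reads matters.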

Another important improvement is support for multiple queues and higher queue depths. Multiple queues ensure that the CPU can be used to its full potential and that IOPS are not bottlenecked by the limitations of a single core.
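As an illustration of why per-core queues help, the sketch below (hypothetical, not real driver code) contrasts AHCI's single shared queue, where every core must take the same lock, with NVMe's model of one submission queue per core:

```python
from collections import deque
from threading import Lock

# AHCI-style: one queue shared by all cores, guarded by a single lock,
# so submissions from different cores serialize on that lock.
ahci_queue = deque(maxlen=32)
ahci_lock = Lock()

def ahci_submit(command):
    with ahci_lock:                  # every core contends for the same lock
        ahci_queue.append(command)

# NVMe-style: one submission queue per core; each core writes only to its
# own queue, so the submission path needs no cross-core locking.
NUM_CORES = 4
nvme_queues = [deque(maxlen=64 * 1024) for _ in range(NUM_CORES)]

def nvme_submit(core_id, command):
    nvme_queues[core_id].append(command)   # no shared lock on this path

ahci_submit("read-4k")
for core in range(NUM_CORES):
    nvme_submit(core, f"read-4k-from-core-{core}")
```

In a real driver the per-core queues live in host memory and the drive is notified via a doorbell write, but the structural point is the same: no single serialization point sits between the cores and the device.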

Source: Microsoft

Obviously the enterprise is the biggest beneficiary of NVMe because its workloads are much heavier and SATA/AHCI can't provide the necessary performance. The client market benefits from NVMe as well, just not as much. As I explained on the previous page, even moderate improvements in performance result in increased battery life, and that's what NVMe will offer: thanks to lower latency, disk usage time will decrease, which results in more time spent at idle and thus increased battery life. There are also corner cases where the better queue support helps with performance.

Source: Intel

With future non-volatile memory technologies and NVMe, overall latency can be cut to one fifth of the current ~100 µs, an improvement that will be noticeable in everyday client usage too. Currently I don't think any client PCIe SSDs support NVMe (the enterprise has been faster to adopt it), but the SF-3700 will once it's released later this year. Driver support for both Windows and Linux already exists, so it's now up to SSD OEMs to release compatible SSDs.

Comments

  • phoenix_rizzen - Friday, March 14, 2014 - link

    I was thinking more for the situation where you replace the current SATA ports on a mobo with PCIe x2 slots.

    So you go from cabling your drives to the SATA ports to cabling your drives to the PCIe ports. Without using up any of the slots on the back of the board/case.
  • SirKnobsworth - Saturday, March 15, 2014 - link

    If you don't want to use actual PCIe slots then have M.2 sockets on the motherboard. There's no reason to have another cabling standard.
  • phoenix_rizzen - Monday, March 17, 2014 - link

    That works too, and is something I mention in another comment above.

    This cable and connector doesn't make sense, any way you look at it.
  • Kracer - Thursday, March 13, 2014 - link

    Are you able to run any sort of PCI-Device over SATAe (GPUs, capture cards, etc.)?
    Two lanes are not enough for GPU use but it would open up much more possibilities.
    Are you able to use it as a boot device?
  • The Von Matrices - Thursday, March 13, 2014 - link

    I understand the desire for faster SSDs, but I still fail to see the purpose of SATA express over competing standards. There's nothing compelling about it over the competition.

    M.2 already provides the PCIe x2 interface and bandwidth (albeit without the ability to use cables).
    Motherboards that support PCIe 3.0 SATA Express without either a high priced PCIe switch or compromising discrete graphics functionality are one to two years away.
    SF3700 is PCIe 2.0 x4, meaning that SATA express can only use half its performance and PCIe x4 cards will still be the enthusiast solution.
    NVMe can already be implemented on other standards.
    The cables are bulky, which is unusual considering that SAS at 12Gb/s (which is available) is using the same small connectors as 6Gb/s.
  • SirKnobsworth - Thursday, March 13, 2014 - link

    M.2 provides a PCIe x4 interface in certain configurations. I think the SATAe specification has the provision for adding another two lanes at some point in the future but that's not going to happen for a long time.
  • Kevin G - Thursday, March 13, 2014 - link

SATAe and NVMe are fast and important for expandable IO. However, I believe they will be secondary over the long term. I fathom that the NAND controller will simply move on-die for mobile SoCs, for power savings, lower physical area, and performance reasons. Some of the NVMe software stack will be used here, but things like lane limitations will be entirely bypassed since it's all on-die. Bandwidth would scale with the number of NAND channels. Power savings will come from removing an external component (SATAe controller and/or external chipset) and from integrating with the SoC's native power management controller. Desktop versions of these chips will put the NAND on a DIMM form factor for expansion.

    SATAe + NVMe will be huge in the server market though. Here RAS plays a bigger role. Features like redundancy and hotswap are important, even with SSDs being more reliable than their hard drive predecessors. I eventually see a backplane version of a connector like mSATA or M.2 replacing 2.5" hard drives/SSDs in servers. This would be great for 1U servers as they would no longer be limited to 10 drives. The depth required on a 1U server wouldn't be as much either. PCIe NVMe cards will fill the same niche today: radically high storage bandwidth at minimal latencies.

    One other thing worth pointing out is that since Thunderbolt encapsulates PCIe, using external SATAe storage at full speed becomes a possibility. Working in NVMe mode is conceptually possible over Thunderbolt too.
  • xdrol - Thursday, March 13, 2014 - link

    Parallel ATA is back, just look at the cable size...
  • JDG1980 - Thursday, March 13, 2014 - link

    A ribbon cable *plus* a Molex? Oh, goody. This looks like a massive step backward.
  • sheh - Thursday, March 13, 2014 - link

    Who doesn't love them flatcables?
