Why Do We Need Faster SSDs

The claim I've often seen around the Internet is that today's SSDs are already "fast enough" and that there is no point in faster SSDs unless you're an enthusiast or professional with a desire for maximum IO performance. There is some truth to that claim but the big picture is much broader than that.

It's true that going from a SATA SSD to a PCIe SSD likely won't bring you the same "wow" factor as going from a hard drive to an SSD did, and for an average user there may not be any noticeable difference at all. However, when you put it that way, does a faster CPU or GPU bring you any noticeable increase in performance unless you have a usage model that specifically benefits from them? No. But what happens if the faster component doesn't consume any more power than the slower one? You gain battery life!

If you go back in time and think of all the innovations and improvements we've seen over the years, one essential part is conspicuously absent: the battery. Compared to other components there haven't been any major improvements in battery technology, so companies have had to rely on improving other components to increase battery life. If you look at Intel's strategy for its CPUs in the past few years, you'll notice that mobile and power efficiency have been the center of attention. It's not an increase in battery capacity that has brought us things like 12-hour battery life in the 13" MacBook Air but more efficient chip architectures that provide more performance without consuming any more power. The term often used here is "race to idle": a faster chip completes a task sooner and can hence spend more time idling, which reduces overall power consumption.

SSDs are no exception to this rule. A faster SSD completes IO requests sooner and thus consumes less power in total because it spends more time idling (assuming the faster drive's idle and load power draws are similar to the slower drive's). If the interface is the bottleneck, there will be cases where the drive could complete tasks faster if the interface allowed it. This is where we need PCIe.

To demonstrate the importance of the SSD from a battery life perspective, let's look at a scenario with a hypothetical laptop. Let's assume our hypothetical laptop has a 50Wh battery and only two power states: light use and heavy use. The SSD in our laptop consumes 1W in light use and 3W under heavier load. The other components consume the rest of the power, and to keep things simple let's assume their power consumption is constant and does not depend on the SSD.
 
Our Hypothetical Laptop

Power Consumption   Light Use   Heavy Use
Whole Laptop        7W          20W
SSD                 1W          3W

Our hypothetical laptop spends 80% of its time in light use and 20% of the time under heavier load. With such characteristics, the average power consumption comes in at 9.6W and with a 50Wh battery we should get a battery life of around 5.2 hours. The scenario here is something you could expect from an ultraportable like the 2013 13" MacBook Air because it has a 54Wh battery, consumes around 6-7W while idling and manages 5.5 hours in our Heavy Workload battery life test.
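As a sanity check, the scenario's arithmetic can be sketched in a few lines of Python (the figures are the article's assumptions, not measurements):

```python
# Weighted-average power draw and battery life for the hypothetical laptop:
# 80% of the time in light use at 7W, 20% in heavy use at 20W, 50Wh battery.
def battery_life_hours(capacity_wh, states):
    """states: list of (time_fraction, power_watts) pairs."""
    avg_power = sum(frac * watts for frac, watts in states)
    return capacity_wh / avg_power

states = [(0.8, 7.0), (0.2, 20.0)]
avg_power = sum(f * w for f, w in states)   # 9.6W average draw
hours = battery_life_hours(50, states)      # ~5.2 hours
print(f"{avg_power:.1f}W average, {hours:.1f}h battery life")
```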

Now for the SSD part. In our scenario above, the average power consumption of our SSD was 1.4W, assuming a SATA 6Gbps design. What if we took a PCIe SSD that was 20% faster in the light use scenario and 40% faster in the heavy one? Our SSD would spend the saved time idling (at a minimal <0.05W) and its average power consumption would drop to 1.1W. That's a 0.3W reduction in the average power consumption of the SSD, and of the system as a whole. In our hypothetical scenario, that would bring a 10-minute increase in battery life.
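One simple way to model the race-to-idle effect is to assume a drive that is 20% faster spends 1/1.2 of the original time active and idles for the rest (a sketch of the reasoning above, not the exact methodology):

```python
IDLE_W = 0.05  # near-idle power during the saved time, per the article

def avg_ssd_power(states):
    """states: list of (time_fraction, active_watts, speedup) triples."""
    total = 0.0
    for frac, watts, speedup in states:
        active = 1.0 / (1.0 + speedup)  # share of time still spent active
        total += frac * (active * watts + (1.0 - active) * IDLE_W)
    return total

sata = avg_ssd_power([(0.8, 1.0, 0.0), (0.2, 3.0, 0.0)])   # 1.4W
pcie = avg_ssd_power([(0.8, 1.0, 0.2), (0.2, 3.0, 0.4)])   # ~1.1W
extra_min = (50 / (9.6 - (sata - pcie)) - 50 / 9.6) * 60   # ~10 minutes
```

This reproduces both the ~0.3W saving and the roughly ten extra minutes of battery life quoted above.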

Sure, ten minutes is just ten minutes, but bear in mind that a single component can't work miracles for battery life. It's when all components become a little faster and more efficient that we get an extra hour or two. Conversely, we would lose an hour of battery life in a few years if the development of one aspect suddenly stopped (i.e. if we were stuck with SATA 6Gbps for eternity), so it's crucial that all aspects are actively developed even if there are no noticeable improvements immediately. Furthermore, the idea here is to demonstrate what faster SSDs provide in addition to increased performance: in the end the power savings depend on one's usage, and in more IO intensive workloads the battery life gains can be much more significant than 10 minutes. Ultimately we'll also see even bigger gains once the industry moves from PCIe 2.0 to 3.0, which doubles the bandwidth.

4K Video: A Beast That Craves Bandwidth

Above I tried to cover a usage scenario that applies to every mobile user regardless of their workload. However, in the prosumer and professional market segments the need for higher IO performance already exists thanks to 4K video. At 24 frames per second, uncompressed 4K video (3840x2160, 12-bit RGB color) requires about 900MB/s of bandwidth, which is far beyond the limits of SATA 6Gbps. While working with compressed formats is common in 4K due to the storage requirements (an hour of uncompressed 4K video would take 3.22TB), it's not unusual for professionals to work with multiple video sources simultaneously, which even with compression can easily exceed the limits of SATA 6Gbps.
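The bandwidth and storage figures are easy to verify:

```python
# Uncompressed 4K (3840x2160, 12-bit RGB) at 24fps, per the text above.
width, height, fps = 3840, 2160, 24
bits_per_pixel = 3 * 12                                  # R, G, B at 12 bits each

bytes_per_frame = width * height * bits_per_pixel // 8   # ~37.3MB per frame
mb_per_s = bytes_per_frame * fps / 1e6                   # ~896MB/s
tb_per_hour = mb_per_s * 3600 / 1e6                      # ~3.22TB per hour
```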

Yes, you could use RAID to at least partially overcome the SATA bottleneck, but that adds cost (a single PCIe controller is cheaper than two SATA controllers) and, especially with RAID 0, increases the risk of array failure (if one disk fails, the whole array is lost). While 4K is not ready for the mainstream yet, it's important that the hardware base be ready by the time mainstream adoption begins.


131 Comments


  • Guspaz - Thursday, March 13, 2014 - link

    The only justification for why anybody might need something faster than SATA6 seems to be "Uncompressed 4K video is big"...

    Except nobody uses uncompressed 4K video. Nobody uses it precisely BECAUSE it's so big. 4K cameras all record to compressed formats. REDCODE, ProRes, XAVC, etc. It's true that these still produce a lot of data (they're all intra-frame codecs, which mean they compress each frame independently, taking no advantage of similarities between frames), but they're still way smaller than uncompressed video.
  • JarredWalton - Thursday, March 13, 2014 - link

    But when you edit videos, you end up working with uncompressed data before recompressing, in order to avoid losing quality.
  • willis936 - Thursday, March 13, 2014 - link

    The case you described (4K, 12bpc, 24fps) would also take an absolutely monumental amount of RAM. I can't think of using a machine with less than 32GB for that and even then I feel like you'd run out regularly.
  • Guspaz - Thursday, March 13, 2014 - link

    Are you rendering from Premiere to uncompressed video as an intermediate format before recompressing in some other tool? If you're working end-to-end with Premiere (or Final Cut) you wouldn't have uncompressed video anywhere in that pipeline. But even if you're rendering to uncompressed 4K video for re-encoding elsewhere, you'd never be doing that to your local SSD, you'd be doing it to big spinning HDDs or file servers. One hour of uncompressed 4K 60FPS video would be ~5TB. Besides, disk transfer rates aren't going to be the bottleneck on rendering and re-encoding uncompressed 4K video.
  • Kevin G - Thursday, March 13, 2014 - link

    That highly depends on the media you're working with. 4K consumes far too much storage to be usable in an uncompressed manner. Up to 1.6 GByte/s is needed for uncompressed recording; a 1 TB drive would fill up in less than 11 minutes.

    As mentioned by others, lossless compression is an option without any reduction in picture quality, though at the expense of the high-performance hardware needed for recording and rendering.
  • JlHADJOE - Thursday, March 13, 2014 - link

    You pretty much have to do it during recording.

    Encoding 4k RAW needs a ton of CPU that you might not have inside your camera, not to mention you probably don't want any lossy compression at that point because there's still a lot of processing work to be done.
  • JlHADJOE - Friday, March 14, 2014 - link

    Here's the Red Epic Dragon, a 6k 100fps camera. It uses a proprietary SSD array (likely RAID 0) for storage:

    http://www.red.com/products/epic-dragon#features
  • popej - Thursday, March 13, 2014 - link

    "idling (with minimal <0.05W power consumption)"
    Where did you get this value from? I'm looking at your SSD reviews and clearly see that idle power consumption is between 0.3 and 1.3W, far from the quoted 0.05W. What is wrong, your assumption here or the measurements in the reviews? Or maybe you measure some other value?
  • Kristian Vättö - Thursday, March 13, 2014 - link

    <0.05W is normal idle power consumption in a mobile platform with HIPM+DIPM enabled: http://www.anandtech.com/bench/SSD/732

    We can't measure that in every review because only Anand has the equipment for it (a modified laptop is required).
  • dstarr3 - Thursday, March 13, 2014 - link

    How does the bandwidth of a single SATAe SSD compare to two SSDs on SATA 6Gbps in RAID 0? Risk of failure aside.
