Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. SSDs do not deliver consistent IO latency because all controllers inevitably have to perform some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance results in application slowdowns.

To test IO consistency, we fill a secure-erased SSD with sequential data to ensure that all user-accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test runs for just over half an hour and we record instantaneous IOPS every second.
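For readers who want to approximate the workload themselves, the description above maps closely onto a fio job file. This is only a sketch: the review's own testing uses Iometer, and `/dev/sdX` is a placeholder for a drive you are willing to overwrite completely.

```ini
; Sketch of the consistency test as a fio job (the article itself uses
; Iometer). WARNING: this destroys all data on /dev/sdX.
[precondition]
rw=write              ; sequential fill so every user LBA holds data
bs=128k
direct=1
filename=/dev/sdX

[consistency]
stonewall             ; start only after the fill completes
rw=randwrite          ; 4KB random writes across all LBAs
bs=4k
iodepth=32            ; QD32
ioengine=libaio
direct=1
refill_buffers        ; keep the data stream incompressible
time_based
runtime=2000          ; just over half an hour
log_avg_msec=1000     ; average and log IOPS once per second
write_iops_log=consistency
filename=/dev/sdX
```

The per-second IOPS log written by `write_iops_log` is what the graphs below plot.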

We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
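As a sketch of the arithmetic behind the over-provisioning figure (the 480GB capacity here is just an illustrative number): restricting the test to 80% of the LBA space leaves the remaining 20% as spare area, which works out to 25% over-provisioning relative to the capacity actually in use.

```python
# Hypothetical 480GB drive; limit the workload to 80% of its LBA range
total_gb = 480
used_gb = total_gb * 0.80       # 384 GB actually written to
spare_gb = total_gb - used_gb   # 96 GB left for the controller as spare

# Over-provisioning is conventionally quoted relative to the used capacity
op = spare_gb / used_gb
print(f"{op:.0%} over-provisioning")  # 25%
```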

Each of the three graphs has its own purpose. The first covers the whole duration of the test on a logarithmic scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a logarithmic scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the dropdown selections below each graph to switch the source data.

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[Graph 1: IOPS over the full test duration, log scale. G.Skill Phoenix Blade 480GB; selections: Default / 25% Over-Provisioning]

Even though the Phoenix Blade and RevoDrive 350 share the same core controller technology, their steady-state behaviors are quite different. The Phoenix Blade delivers substantially higher peak IOPS (~150K) and is also more consistent in steady-state: the RevoDrive frequently drops below 20K IOPS, whereas the Phoenix Blade does not.

[Graph 2: zoom on steady-state operation (t=1400s onward), log scale. G.Skill Phoenix Blade 480GB; selections: Default / 25% Over-Provisioning]

[Graph 3: zoom on steady-state operation (t=1400s onward), linear scale. G.Skill Phoenix Blade 480GB; selections: Default / 25% Over-Provisioning]

TRIM Validation

To test TRIM, I turned to our regular TRIM test suite for SandForce drives. First I filled the drive with incompressible sequential data, which was followed by 60 minutes of incompressible 4KB random writes (QD32). To measure performance before and after TRIM, I ran a one-minute incompressible 128KB sequential write pass.

Iometer Incompressible 128KB Sequential Write
                               Clean       Dirty       After TRIM
G.Skill Phoenix Blade 480GB    704.8MB/s   124.9MB/s   231.5MB/s

The good news here is that the drive receives the TRIM command, but unfortunately it doesn't fully restore performance, although that is a known problem with SandForce drives. What's notable is that the first LBAs written after the TRIM command were fast (600+ MB/s), so in due time performance across all LBAs should recover, at least to a certain point.
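Putting the table's numbers in relative terms makes the partial recovery concrete (a quick sketch using only the figures quoted above):

```python
# Throughput figures from the table above (MB/s)
clean, dirty, after_trim = 704.8, 124.9, 231.5

# Dirty state retains only ~18% of fresh-drive performance...
print(f"dirty:      {dirty / clean:.1%} of clean")
# ...and TRIM brings it back to only about a third.
print(f"after TRIM: {after_trim / clean:.1%} of clean")
```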

62 Comments

  • Havor - Sunday, December 14, 2014 - link

    What really sucks is that Intel continues attaching a PCH to the host processor through a four-lane DMI 2.0 connection on even the X99. You only get 2 GB/s of bi-directional throughput.

    So a 3-disk RAID 0 or a 4-disk RAID 5 is all it takes to saturate the DMI connection between the chipset and the CPU, even though you have 10x SATA3 connectors.

    At the moment the only options for faster storage are M.2 and PCIe.

    And for the desktop, only M.2 with a native PCIe 3.x x4 interface will be able to deliver cost-effective solutions, once good SSD controllers are finally developed.
  • alacard - Sunday, December 14, 2014 - link

    You're preaching to the choir on that one. 2GB per second (actually only 1800MB/s after overhead) divided between 10 SATA ports, 14 USB ports (6 of them 3.0), Gigabit LAN, and 8 PCI Express lanes is an absolute joke.
  • TheWrongChristian - Monday, December 15, 2014 - link

    What you're missing is that while an SSD at peak speed can saturate a SATA 3 link, and three such drives can saturate the 2GB/s DMI connection, even the best SSDs rarely reach such speeds with normal workloads.

    Random (especially low queue depth 4K random) workloads tend to be limited to much lower speeds, and random IO is much more representative of typical workloads. Sequential workloads are usually bulk file copy operations, and how often do you do that?

    So, given your 10x SATA 3 connectors, what workload do you possibly envisage that would require that combined bandwidth? And benchmark dick swinging doesn't count.
  • personne - Sunday, December 14, 2014 - link

    My tasks are varied but they often involve opening large data sets and importing them into an inverted index store, at the same time running many process agents on the incoming data as well as visualizing it. This host is also used for virtualization. Programs loading faster is the least of my concerns.
  • AllanMoore - Saturday, December 13, 2014 - link

    Well, you can see the blistering speed of the 480GB version compared to the 240GB; see the table: http://picoolio.net/image/e4O
  • EzioAs - Saturday, December 13, 2014 - link

    I know RAID 0 (especially with 4 drives) theoretically gives high performance, but is it really worth the data risk? I question the laptop manufacturers and PC OEMs that actually build RAID 0 arrays with SSDs for customers; it's just not good practice imo.
  • personne - Monday, December 15, 2014 - link

    RAM is much more volatile than flash or spinning storage, yet it has its place. An SSD is in a sense already a RAID array, since many chips are used. And it's been posted that the failure rate of a good SSD is much lower than that of a HDD, so multiple SSDs still fail less often than a single HDD. And one should always have good backups regardless. So if the speed is worth it, it's not at all unreasonable.
  • Symbolik - Sunday, December 14, 2014 - link

    I have 3x Kingston HyperX 240GB in RAID 0. I have 4 of them, but 3 maxes out my AMD RAID gains; it is significant over 2, at around 1000 x 1100 MB/s r/w (ATTO disk benchmark). I have tried 4 and the gain was minimal. To get further gains from the 4th, I'd probably need to put in an actual RAID card. I know it's not Intel, but it is SandForce.
  • Dug - Friday, December 12, 2014 - link

    You say: "As a result the XP941 will remain as my recommendation for users that have compatible setups (PCIe M.2 and boot support for the XP941) because I'd say it's slightly better performance wise and at $200 less there is just no reason to choose the Phoenix Blade over the XP941, except for compatibility"

    I'm curious, what are you using to determine the XP941 has slightly better performance? It just seems to me most of the benchmarks favor the Phoenix Blade.
  • Kristian Vättö - Friday, December 12, 2014 - link

    It's the 2011 Heavy Workload in particular where the XP941 performs considerably better than the Phoenix Blade, whereas in the 2013 and 2011 Light suites the difference between the two is quite small. The XP941 also has better low-QD random performance, which is typically important for desktop workloads.
