Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger; thus we have the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). As I've explained in the comments in previous reviews, simulating the type of random access you see in a desktop workload is difficult to do. Small file desktop accesses aren't usually sequential but they're not fully random either. By limiting the LBA space to 8GB we somewhat simulate a constrained random access pattern, but again it's still more random than what you'd see on your machine. Your best bet for real world performance is to look at our Storage Bench charts near the end of the review as they accurately record and play back traces of real world workloads.
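A minimal sketch of what such a constrained random pattern looks like (the helper name and constants are hypothetical illustrations; Iometer's actual access generator is not shown here):

```python
import random

IO_SIZE = 4 * 1024            # 4KB per request
LBA_SPAN = 8 * 1024 ** 3      # restrict randomness to an 8GB region of the drive

def random_4k_offsets(n, seed=0):
    # Yield n byte offsets for 4KB-aligned random I/Os within the 8GB span.
    rng = random.Random(seed)
    slots = LBA_SPAN // IO_SIZE   # number of distinct 4KB-aligned targets
    for _ in range(n):
        yield rng.randrange(slots) * IO_SIZE

offsets = list(random_4k_offsets(1000))
```

Note that the accesses are random only within the 8GB window; the rest of the drive is untouched, which is what makes the pattern "constrained" rather than fully random.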

For our random access tests I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo randomly generated data (data is random within a write, but duplicated between writes) for each write as well as fully random data (data is random within a write and random across most writes) to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why the type of data you write to SF drives matters, read our original SandForce article.
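To see why data content matters to SandForce drives, here's a rough illustration using zlib as a stand-in for the controller's compression (SandForce's actual DuraWrite algorithm is proprietary and not shown):

```python
import os
import zlib

block = os.urandom(4096)          # one fully random 4KB payload

duplicated = block * 8            # random within a write, duplicated between writes
unique = os.urandom(4096 * 8)     # random within a write AND across writes

# A compressing/deduplicating controller benefits from repetition between
# writes: the duplicated stream shrinks dramatically, the unique one barely at all.
ratio_dup = len(zlib.compress(duplicated)) / len(duplicated)
ratio_unique = len(zlib.compress(unique)) / len(unique)
```

The duplicated stream compresses to a fraction of its size while the fully random stream does not compress at all, which is why the two data patterns bracket a SandForce drive's best- and worst-case performance.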

Iometer - 4KB Random Write, 8GB LBA Space, QD=3

The Corsair Nova is our Indilinx Barefoot representative in this preview, and you can see how performance has improved with the Martini controller. While the original Indilinx Barefoot traded good sequential performance for slower-than-Intel random performance, Martini fixes the problem. It's not in the class of SandForce's SF-1200, but Indilinx appears to have delivered random write performance equal to Intel's X25-M G2.

Many of you have asked for random write performance at higher queue depths. What I have below is our 4KB random write test performed at a queue depth of 32 instead of 3. While the vast majority of desktop usage models experience queue depths of 0 - 5, higher depths are possible in heavy I/O (and multi-user) workloads:
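The idea of queue depth (the number of I/Os kept in flight at once) can be sketched like this; a hypothetical thread-based illustration, whereas Iometer issues real asynchronous device I/O:

```python
import concurrent.futures
import time

IO_SIZE = 4 * 1024

def issue_io(offset):
    # Stand-in for a single 4KB device request (hypothetical; no real disk I/O).
    time.sleep(0.001)
    return offset

def run_at_queue_depth(offsets, qd):
    # Keep up to `qd` requests outstanding at once, analogous to
    # Iometer's "outstanding I/Os" setting.
    with concurrent.futures.ThreadPoolExecutor(max_workers=qd) as pool:
        return list(pool.map(issue_io, offsets))

results = run_at_queue_depth(range(0, 64 * IO_SIZE, IO_SIZE), qd=32)
```

At QD=32 the drive sees 32 outstanding requests it can service in parallel, which is why high-queue-depth numbers flatter SSDs relative to the QD=3 desktop case.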

Iometer - 4KB Random Write, 8GB LBA Space, QD=32

Our random read test is similar to the random write test, except that we lift the 8GB LBA space restriction:

Iometer - 4KB Random Read, QD=3

Random read performance falls short of Intel and basically hasn't changed since the Barefoot. It's not bad at all, but not industry leading.

Comments

  • jordanclock - Tuesday, November 16, 2010

    Garbage collection is file-system agnostic. It happens on a level between hardware and the file-system. It will work for any and all file-systems.
  • Jaybus - Tuesday, November 16, 2010

    Unlike a spinning disk, there is no penalty for where blocks are physically placed. The controller therefore maps the address of the requested block to the real physical location of the block. This mapping is internal to the drive controller and at a lower level (the block level) than the file system. When a SATA request comes in to write a block that has previously been written to and already contains data, the controller would normally have to erase the block before writing the new data. What garbage collection does is change the mapping so that the write is made to a block that is already erased, while the old block is mapped to a list of blocks that now contain stale (ie. garbage) data. The idea is that this speeds up the writes, since the erase is delayed and performed later when disk activity is idle.

    This doesn't require a TRIM command, since it is all handled internally by the drive controller. It is at the block level, so file system used doesn't matter.
  • melgross - Tuesday, November 16, 2010

    So, and I asked this before, in another thread: is there an advantage to TRIM? If so, what else is it doing that would make that true? I received no answer there.
  • akedia - Tuesday, November 16, 2010

    TRIM is a command from the operating system to the drive to clean up its dirty sectors and ready them for new data. Background garbage collection is the drive quietly doing the same thing by itself when it's idle. Either way it's a process that must be undergone to keep the SSD operating well.
    The benefit of TRIM is that the operating system is much better at knowing when it's about to access the drive than the drive is at guessing when it's going to be accessed by the operating system, so TRIM can fulfill the function more efficiently and diligently. The big drawback to TRIM is that not all operating systems have it. Thus the Crucial RealSSD C300, which can receive TRIM commands but cannot do background garbage collection, is arguably the best available SSD on Windows 7, which has TRIM, but would have its performance degrade very badly over time on OS X, which does not, while SandForce SSDs like the OCZ Vertex 2 perform about equally on either system, their own garbage collection taking care of things.
    In short, yes, there is a benefit to TRIM in keeping SSDs functioning optimally over time better than idle garbage collection most of the time, IF you have an operating system AND an SSD that utilize the command.
  • cdillon - Tuesday, November 16, 2010

    Garbage collection doesn't have ANY knowledge of file-systems like some people seem to think. That would be entirely too dangerous for your data. Block devices have absolutely no business erasing your data without the OS asking it to.

    Garbage Collection and TRIM are orthogonal. When new data is written to the SSD, it is written to the pre-erased blocks that exist in the spare block pool. The LBA for that data will now point to the new flash block. Since the contents of the logical blocks are now stored in different physical flash blocks, the old flash blocks now contain orphaned "garbage" data and can be erased and returned to the spare pool. The purpose of Garbage Collection is to erase these previously used blocks and put them back into the spare pool.

    This allows for fast writes and some wear-leveling. If you over-run the spare pool with your writes then you'll see the writes slow down until garbage collection and/or TRIM has recovered at least some of the spare block pool.

    The advantage of TRIM is that it gives the SSD information about ADDITIONAL blocks that can be erased and put into the spare pool above and beyond the fixed spare pool. The smaller the fixed spare pool size is on an SSD, the more TRIM will benefit.
  • melgross - Wednesday, November 17, 2010

    Well, this is what I thought from what I've read.

    But, it also seems to me that you can get performance at least as good as TRIM with controllers that are aggressive, as aggressive as the TRIM itself may be. We can see from the new Toshiba controller that performance can remain at about 100% if it's aggressive. Yes, that may lead to reduced life, but that's got to be tested.

    One advantage I see for garbage collection by the drive itself is that it's matched to the drive, and the extra flash should help determine how aggressive it can get. But TRIM is independent of the drive, and so treats each drive the same. That could result in poorer performance, and possibly shorter lifetime.

    I still don't see where any advantage to TRIM exists.
  • dbt - Tuesday, November 16, 2010

    And... more importantly, another good article - thank you!
  • cactusdog - Tuesday, November 16, 2010

    I have a Vertex 2 that I'm very happy with, but can't OCZ come up with a different name instead of putting very different controllers with very different performance levels under the same "Vertex" model? It's a good way to confuse consumers.
  • Shadowmaster625 - Tuesday, November 16, 2010

    I guess this is why you can buy one of the original vertexes (vertices?) for $40 after rebate.
  • mschira - Tuesday, November 16, 2010

    All these 2.5" SSDs are fine for desktops and for replacement in existing laptops.
    But where is the move towards 1.8" models, or better yet the super-small form factor the MacBook Air uses? (btw, does anybody know what sort of SSD the Sony Z-Series uses?)

    2.5" drives are becoming a limitation for small, thin notebooks. Time to get rid of them!
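The remap-on-write and deferred-erase scheme described in the comments above can be sketched as a toy model (all names hypothetical; a real flash translation layer also tracks pages within erase blocks, handles wear-leveling, and much more):

```python
class TinyFTL:
    """Toy flash translation layer: remap on write, erase later."""

    def __init__(self, spare_blocks):
        self.mapping = {}                # logical block -> physical block
        self.spare = list(spare_blocks)  # pre-erased physical blocks
        self.garbage = []                # stale blocks awaiting erase

    def write(self, lba):
        # Overwrites never erase in place: redirect the LBA to a pre-erased
        # block and retire the old physical block to the garbage list.
        new_phys = self.spare.pop()
        old_phys = self.mapping.get(lba)
        if old_phys is not None:
            self.garbage.append(old_phys)
        self.mapping[lba] = new_phys

    def collect_garbage(self):
        # Idle-time GC: erase stale blocks, return them to the spare pool.
        while self.garbage:
            self.spare.append(self.garbage.pop())

    def trim(self, lba):
        # TRIM: the OS declares an LBA unused, freeing its block early,
        # growing the effective spare pool beyond the fixed over-provisioning.
        old_phys = self.mapping.pop(lba, None)
        if old_phys is not None:
            self.garbage.append(old_phys)

ftl = TinyFTL(spare_blocks=range(100, 104))
ftl.write(0)
ftl.write(0)            # rewrite: the old physical block becomes garbage
ftl.collect_garbage()   # stale block erased and returned to the pool
```

This also shows why writes only slow down once the spare pool runs dry: until then, every write lands on an already-erased block and the erase cost is deferred.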
