Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read, and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger; thus we have the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). As I've explained in the comments in previous reviews, simulating the type of random access you see in a desktop workload is difficult to do. Small file desktop accesses aren't usually sequential but they're not fully random either. By limiting the LBA space to 8GB we somewhat simulate a constrained random access pattern, but again it's still more random than what you'd see on your machine. Your best bet for real world performance is to look at our Storage Bench charts near the end of the review as they accurately record and play back traces of real world workloads.

For our random access tests I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo-randomly generated data (random within a write, but duplicated between writes) and fully random data (random within a write and random across most writes) to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why the type of data you write to SF drives matters, read our original SandForce article.
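For readers curious what this access pattern looks like in code, here is a simplified sketch (mine, not Iometer's; it issues writes synchronously at QD=1 against an ordinary file rather than a raw device, so its numbers are only illustrative):

```python
import os
import random
import time

BLOCK = 4 * 1024            # 4KB transfer size
LBA_SPAN = 8 * 1024 ** 3    # constrain offsets to an 8GB region

def random_write_test(path, span=LBA_SPAN, duration=3.0):
    """Issue 4KB writes at random 4KB-aligned offsets within `span` and
    return average throughput in MB/s. This is synchronous (QD=1), so it
    only illustrates the access pattern, not Iometer's 3 outstanding IOs."""
    payload = os.urandom(BLOCK)  # random within a write, reused across writes
    fd = os.open(path, os.O_WRONLY | os.O_CREAT)
    written = 0
    deadline = time.monotonic() + duration
    try:
        while time.monotonic() < deadline:
            # Pick a random 4KB-aligned offset inside the constrained span
            offset = random.randrange(span // BLOCK) * BLOCK
            os.pwrite(fd, payload, offset)
            written += BLOCK
    finally:
        os.close(fd)
    return written / duration / 1024 ** 2
```

Restricting offsets to the first 8GB of the target is all that "limiting the LBA space" amounts to; widening `span` to the whole drive turns this into a fully random pattern like the one our read test uses.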

Iometer - 4KB Random Write, 8GB LBA Space, QD=3

The Corsair Nova is our Indilinx Barefoot representative in this preview, and you can see how performance has improved with the Martini controller. While the original Indilinx Barefoot traded good sequential performance for slower-than-Intel random performance, Martini fixes the problem. It's not in the class of SandForce's SF-1200, but Indilinx appears to have built a performer equal to Intel's X25-M G2.

Many of you have asked for random write performance at higher queue depths. What I have below is our 4KB random write test performed at a queue depth of 32 instead of 3. While the vast majority of desktop usage models experience queue depths of 0 - 5, higher depths are possible in heavy I/O (and multi-user) workloads:
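Iometer keeps its 32 IOs outstanding natively; as a rough, hypothetical approximation in plain Python, one can model queue depth by keeping N synchronous writers in flight with a thread pool (`os.pwrite` takes an explicit offset, so all threads can safely share one file descriptor):

```python
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

BLOCK = 4 * 1024            # 4KB transfer size
LBA_SPAN = 8 * 1024 ** 3    # 8GB LBA span

def qd_random_write(path, queue_depth=32, duration=3.0, span=LBA_SPAN):
    """Approximate a given queue depth with `queue_depth` threads, each
    keeping one synchronous 4KB random write in flight. Returns MB/s."""
    payload = os.urandom(BLOCK)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT)
    deadline = time.monotonic() + duration

    def worker():
        done = 0
        while time.monotonic() < deadline:
            offset = random.randrange(span // BLOCK) * BLOCK
            os.pwrite(fd, payload, offset)  # offset-carrying write: thread-safe
            done += BLOCK
        return done

    try:
        with ThreadPoolExecutor(max_workers=queue_depth) as pool:
            futures = [pool.submit(worker) for _ in range(queue_depth)]
            written = sum(f.result() for f in futures)
    finally:
        os.close(fd)
    return written / duration / 1024 ** 2
```

A real benchmark would use asynchronous IO against a raw device; this sketch only shows how queue depth maps onto the number of concurrently outstanding requests.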

Iometer - 4KB Random Write, 8GB LBA Space, QD=32

Our random read test is similar to the random write test, except that we lift the 8GB LBA space restriction:

Iometer - 4KB Random Read, QD=3

Random read performance falls short of Intel and basically hasn't changed since the Barefoot. It's not bad at all, but not industry leading.

Comments



  • scook9 - Tuesday, November 16, 2010 - link

    Looks like a nice update to the Barefoot, but compared to the Intel G2 it is nothing groundbreaking at all. There were also reliability issues with the Barefoot that this would have to overcome (at least for me; that is why I have Intel G2s now). The price is also not that exciting given that the Intel G2 120GB just came out and is well priced.
  • Out of Box Experience - Wednesday, November 17, 2010 - link

    There are reliability issues with SandForce as well.

    I just checked, and there are almost as many complaints at Newegg as there are over at the OCZ forums regarding their SSDs.

    I could be wrong, but it seems most people complaining about bricked drives or losing all their data are the ones who take OCZ's advice and do all the recommended tweaks and firmware updates.

    I personally torture-tested both Vertex and Vertex 2 drives without any tweaks or firmware updates, and I have never had any trouble with either drive.

    I do full formats and partition under XP (both OCZ no-nos).
    I defragged both drives several times and never used TRIM, yet both drives are working fine.

    I think my next torture test will be to use the recommended OCZ tweaks and firmware updates

  • boe - Thursday, November 18, 2010 - link

    I'm anxious to build a new system with a Sandy Bridge processor, an ATI 6970 or NVIDIA video card, and an SSD. However, I need about 2TB of total storage; these puny SSD solutions would have been very practical about a decade ago, but most of us looking for a high-end computer that might include the more expensive SSD need a LOT more space.
  • TF2pro - Friday, November 19, 2010 - link

    Well of course you don't buy an SSD for space. I spent $160 on my 60GB SandForce drive, and that could have bought me 3.5TB worth of mechanical drive space. Would I do it again? 100 times over. I have been building reasonably high-end systems for myself for years, and an SSD is the missing element. If you are looking for 2TB SSDs you shouldn't even be reading this article... wait 2-3 years and then come back; until then, if you really wanna see a jump in the speed of your PC, get an SSD. Also, 2TB drives are 90 bucks, so just get both.
  • Qapa - Friday, November 19, 2010 - link

    Well, that's not entirely true... if you have $3k or $4k to spend for SSDs, you can buy 900GB-1TB SSDs (just check www.NewEgg.com).

    The question is, does that make sense? Not really, for most people.
  • TF2pro - Friday, November 19, 2010 - link

    Well yes of course you COULD get 2tb of SSD... but if you have that much money you probably aren't reading this forum.. your butler is reading it for you...
  • rbarone69 - Sunday, November 21, 2010 - link

    Tech people come in all sizes, shapes and backgrounds. Some have millions, some don't. I am well off, but technology is a passion for me. My job and my 'fun' do revolve around tech.

    My point is Anand has some of the most informative articles on the internet regarding tech. It doesn't matter how much money you have; you go where the quality is.
  • marraco - Tuesday, November 16, 2010 - link

    I would like to read tests of ICH10 RAID0 made of different disks.

    Is the performance averaged, or bottlenecked to the slower disk?

    I don't want opinions, as credible as they may be. I want actual real tests.

    Publication of those tests may encourage a new kind of RAID controller, in which the load is balanced between drives of different performance.

    Today's controllers expect similar performance from each drive, so they balance load equally across all SSDs.

    But let's say that tomorrow I buy a second, much faster SSD, and I want to do RAID 0 with my Vertex 2.
    A good controller should split the data in proportion to each drive's speed.
  • DanNeely - Tuesday, November 16, 2010 - link

    That only makes sense if the drives' capacity differences roughly match their speed differences as well; a use case I suspect is too uncommon to be worth developing for.
  • marraco - Tuesday, November 16, 2010 - link

    Smart point. Now it should be taken into account that speeds and sizes are both increasing greatly and simultaneously.

    And it only makes the RAID controller more interesting.

    It is up to the user to decide whether he wants to be bottlenecked by speed, or be forced to reduce partition size to gain speed at the cost of capacity.

    Let's say that an old 100GB SSD is to be paired with a 150GB SSD that is 2X faster (and thus should store 2X as much data as the slower SSD).

    Then choices are:

    1-Reduce the partition on the slower SSD to 75GB, to match it against the 150GB drive at 2X speed. It would result in a 3X speed improvement (using simplified numbers for clarity), and a 125% increase in storage space. Also, 25GB on the slower SSD would be left free for a non-RAID partition, as ICH10 allows today.

    2-Use all the capacity on both drives, as if the new SSD were only 50% faster. It would waste speed, because the improvement would be 2.5X instead of 3X, but no space would need reallocation.

    3-Anything in the middle.
