57 Comments

  • Ralos - Saturday, March 27, 2010 - link

    Hi Anand,

    Using Firefox 3.6.2 with the security options activated, your site appears to be blocked for security reasons (flagged as untrustworthy); www.anandtech.com/storage specifically.

    Thought you'd like to be informed of this misunderstanding.
  • StormyParis - Saturday, March 27, 2010 - link

    What is the CPU usage?
  • psychobriggsy - Friday, March 26, 2010 - link

    It would have been nice to see the Marvell speeds when attached via PCIe on the AMD board. It seems like an obvious thing to include, to be honest.

    Impressive write speeds for the AMD controller, which gives a lot of hope that they can improve the read speeds, as they indicate they can with their in-house test bed.

    AMD should bulk up their test bed with retail motherboards as well, so that they don't just test in ideal circumstances.
  • assassin37 - Friday, March 26, 2010 - link

    Anand,

    I'm torn.
    I have the Gigabyte X58-UD3R system with an i7, and I also have the Gigabyte AMD 890GPA-UD3H with a Phenom 965. Lastly, I have a 256GB Crucial C300 and two Vertex 120s. I have to return one (mobo + CPU) setup to Newegg soon. What would you do? Sorry, I know this is not really a comment.
  • TrackSmart - Friday, March 26, 2010 - link

    I'm not Anand, so I can't say what he would do. But honestly, it's a matter of your personal preference and priorities.

    I like to support competition, so I put together an AMD Phenom II X4 system instead of an Intel Core i5 750 system. I chose AMD because they offered me similar performance per dollar (they were slightly cheaper but had slightly lower performance), plus I felt good about supporting much-needed competition in the CPU market.

    What are YOUR priorities? Maximum performance? Supporting competition in the CPU/GPU market? Best performance per dollar? Most energy efficient?

    That should be what drives your decision. The hardware you listed will all be blazingly fast, whatever you decide. The Intel platform offers potentially higher performance, but probably at slightly higher cost. Your choice. Same for the SSDs.

    [sorry if that wasn't a "you should do this" kind of answer.]
  • wiak - Thursday, March 25, 2010 - link

    What about the HighPoint Marvell 6Gbps PCIe 2.0 card on an AMD 7-series chipset? That matters for me since my AMD 790FX motherboard has no USB3 or SATA 6Gbps.

    It would make this article fully complete; it's the only thing that's missing! :)
  • georgekn3mp - Thursday, March 25, 2010 - link

    About the two different Marvell controllers, the 88SE9123 and the 88SE9128: the older 9123 does NOT support RAID, while the newer 9128 DOES natively support RAID 0, 1, and 5.

    Unfortunately, on my Asus P6X58D the controller is the older 9123, so the only way I could RAID a SATA-III SSD (or even a mechanical drive) is using "Windows" RAID, not firmware on the controller. Whether it hurts performance is harder to say since I can't test it yet ;).

    I have been planning on the 256GB RealSSD for a couple of months now and am happy they started shipping... one of the main reasons I picked the Asus board was the native USB3 and SATA-III support. Unfortunately it does not support the RAID function, but at almost $750 a drive I was not going to RAID for a while anyway... I AM happy I went with X58 for sure!

    It seems the newer Gigabyte boards, UD4 or higher, do have the newer controller and are better for RAID SSD setups, especially now that it is hardware supported. The open question no one has been able to answer is whether the Marvell 88SE9128 will pass TRIM commands to a RAID SSD set. So far, Gigabyte boards appear to be the only ones with that controller...

    Intel just updated their ICH10R chipset firmware to pass TRIM to SSDs in RAID... hopefully Marvell does too.

    Since disk speed is the bottleneck on my new computer, $750 is worth it, even if it just prompts me to CrossFire my 5850 because the bottleneck shifted to graphics... especially with the i7-920 OC'd to 4GHz ;)
  • deviationer - Friday, March 26, 2010 - link

    So the P6X58D does have the PLX chip?
  • Mark McGann - Monday, May 10, 2010 - link

    The P6X58D Premium apparently does not, according to this link:

    http://benchmarkreviews.com/index.php?option=com_c...

    I don't know about the newer P6X58D-E.
  • KaarlisK - Thursday, March 25, 2010 - link

    Software RAID is definitely no slouch:
    http://kmwoley.com/blog/?p=429
    But this comparison used a very old ICH.
  • sparkuss - Thursday, March 25, 2010 - link

    Anand,

    I was going to maybe get two C300's for my current build. Do we consumers need to wait for your update before we invest in these?

    We know it died, but I haven't been able to find any other reliability statistics collated anywhere to base a buying decision on.
  • sparkuss - Thursday, March 25, 2010 - link

    Sorry, I missed the Update link in the upper corner.
  • vol7ron - Thursday, March 25, 2010 - link

    Great review. Not much to be said. There was a little bit of puffery at the end in AMD's favor.

    I'm sure most companies have faster controllers/BIOSes to be released. Rather than saying AMD is something to look out for, for some reason I'd think Intel would have something greater.

    As you mentioned, the on-die controller should have lower latencies - could you ask them about this? Perhaps some of the PCIe bandwidth is being chewed up by something else, or perhaps the latencies are too low, causing a check/repeat bottleneck? (Or maybe this is a marketing ploy to release something faster in the future.)
  • Dzban - Thursday, March 25, 2010 - link

    Because AMD has native 6Gbps and they are improving their drivers. With Intel chipsets you can't physically increase the speed any further.
  • vol7ron - Thursday, March 25, 2010 - link

    I don't like how Intel switches between [Mb/s & Gb/s] and [MB/s & GB/s]. It'd be nicer to not have to translate 480Mb/s into 60MB/s.

    I guess the issue was that on first pass I almost equated the 480Mb/s to the 500MB/s right under it.
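
    As a rough illustration of the conversion (a minimal Python sketch; the helper name is just illustrative, and it simply divides by 8 while ignoring encoding/protocol overhead):

        def mbps_to_MBps(megabits_per_second: float) -> float:
            """Convert a line rate in Mb/s to MB/s (8 bits per byte)."""
            return megabits_per_second / 8

        # USB 2.0's 480Mb/s signalling rate works out to 60MB/s before any overhead.
        print(mbps_to_MBps(480))  # 60.0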
  • jejeahdh - Thursday, March 25, 2010 - link

    You should not type dates in that format, and if you had an editor, he or she should absolutely stop you from doing such things. People have expectations. You might think it's no worse than the ever-present traditional ambiguous formats of the US and Europe (m/d/yy(yy), d/m/yy(yy)), which are bad enough, but at least that's an old and well-recognized problem that people are used to living with, so long as it uses slashes. People with knowledge of standards, though, use dashes for the ISO date format, yyyy-mm-dd, which is also perfectly sortable. By mixing and matching styles haphazardly, you're only propagating the notion that anything goes, causing people to stop and wonder for 12 days out of every month. If you're deliberately adopting the style commonly used in the Netherlands (I had to look it up) and advocating its use for an international audience, I cannot imagine why.

    I know it seems crazy to harp on this and I kind of agree... but I am just so surprised to see it here, written by a detail-oriented, technically minded, accomplished writer.
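
    For what it's worth, a minimal Python sketch contrasting the sortable ISO 8601 form with the ambiguous style being criticized (standard library only):

        from datetime import date

        d = date(2008, 6, 22)
        print(d.isoformat())           # 2008-06-22: unambiguous and lexically sortable
        print(d.strftime("%m-%d-%Y"))  # 06-22-2008: the ambiguous m-d-yyyy style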
  • strikeback03 - Friday, March 26, 2010 - link

    If this is in response to the IOMeter build, that might be the way it was named by its creator, not Anand. Also, I would imagine 6-22-2008 is m-dd-yyyy
  • assassin37 - Thursday, March 25, 2010 - link

    Hey Anand, why isn't the Gigabyte X58 native 6Gbps board in the write benchmarks?
  • assassin37 - Thursday, March 25, 2010 - link

    Never mind, I read why: legacy mode.
  • vailr - Thursday, March 25, 2010 - link

    Intel releases SSD friendly AHCI/RAID driver:
    http://www.pcper.com/#NewsID-8538
  • assassin37 - Thursday, March 25, 2010 - link

    Why isn't the Gigabyte X58 native 6Gbps board in the write benchmarks?
  • blacksun1234 - Thursday, March 25, 2010 - link

    I would like to see HD Tune & HD Tach average read speeds with the Crucial drive for each chipset. With that benchmark, the AMD SB850 could beat Marvell's solution by a lot!
  • Nickel020 - Thursday, March 25, 2010 - link

    There's a small error on page 4, that's an X58A-UD3R you've got there, not an X58-UD3R.

    Also, there seem to be two different Marvell 6G controllers, the 88SE9123 and the 88SE9128. What's the difference between these two?
  • Nickel020 - Thursday, March 25, 2010 - link

    Finished reading, very interesting results :)

    I find it really strange that P55 performs so poorly. I wonder whether it also performs poorly when used with SATA 3G SSDs, seeing as I'm just about to migrate my Vertex 60GB RAID 0 from P45/ICH10R to P55.
    It would be great if you could look into that as well; better storage performance would be a major reason to buy S1366 instead of S1156.
  • Etern205 - Thursday, March 25, 2010 - link

    If it's possible, mind adding the Asus U3S6 to your test (in an updated article), since that card uses a PCIe x4 interface?
    Thank You! :)

    The card
    http://www.newegg.com/Product/Product.aspx?Item=N8...
  • nerdtalker - Thursday, March 25, 2010 - link

    That's an interesting card, since it appears from the photo to incorporate the 4x PCIe 1.0 PLX controller, or essentially the same on-motherboard solution ASUS was using.

    That seems like a much more interesting card to test.
  • 7Enigma - Thursday, March 25, 2010 - link

    Hi Anand,

    I have to admit that this particular article was a bit confusing for me, probably because the test rigs are so similarly named that I was going back and forth. My question is: how do this article's results correlate to earlier boards (P45 for me in particular)? Am I understanding correctly that sticking a 6Gbps SATA card into my rig would actually be detrimental to performance if I were to get a new SSD in the coming months?

    Thanks for the informative article.
  • semo - Thursday, March 25, 2010 - link

    Hi Anand,

    On the 1st page, were you comparing Vertex LE performance on 890GX vs X58 or H55? Also, do you have any comments on why its random read is slower than its random write? AFAIK this is the only SSD with such characteristics.

    Thanks
  • Casper42 - Thursday, March 25, 2010 - link

    I noticed the same. Text says compared to X58 but both charts on page 1 say H55.
  • Exodite - Thursday, March 25, 2010 - link

    With the Thuban hexa-cores and 890X/FX boards in the pipeline, AMD looks better and better for my next rig. After building a 790FX/PII 965BE rig for a friend, however, I was worried by the obviously poor disk performance, even in comparison to my old P35/E6600 setup with an older HDD.

    I appreciate being kept up to date with this development as I see disk performance as the only major drawback of the platform at this point.
  • iwodo - Thursday, March 25, 2010 - link

    HDD performance has never really been the center of discussion, since hard drives are always slow anyway. But with SSDs, it has finally become clear that the SATA controller makes a lot of difference.

    So what can we expect from future SATA controllers? Is there any more performance we can squeeze out?
  • KaarlisK - Thursday, March 25, 2010 - link

    Do the P55 boards allow plugging in a graphics card in one x16 slot, and an IO card in the other x16 slot?
    According to Intel chipset specs, only the server versions of the chipset should allow that.
  • CharonPDX - Thursday, March 25, 2010 - link

    You talk about combining four PCIe 1.0 lanes to get "PCIe 2.0-like performance".

    PCIe doesn't care what generation it is; it only cares about how much bandwidth is available.

    Four PCIe 1.0 lanes will provide DOUBLE the bandwidth of one PCIe 2.0 lane. (4x250=1000 each way, 1x500=500 each way.)

    The fact that ICH10 and the P/H55 PCHs have 6-8 PCIe 1.0 lanes is more than enough to dwarf the measly 2 PCIe 2.0 lanes the AMD chipset has. (6x250=1500 or 8x250=2000 are both greater than 2*500=1000.) Irregardless, all three chipsets only have 2 GB/s between those PCIe ports and the memory controller.

    Why Highpoint cheaped out and put a two-port SATA 6Gb/s controller on a one-lane PCIe card is beyond me. Even at PCIe 2.0, that's still woefully inadequate. That REALLY should be on a four-lane card. Nobody but an enthusiast is going to buy it right now, and more and more "mainstream" boards are coming with 4-lane PCIe slots.

    By the way, the 4-lane slot on the DX58SO is PCIe 2.0, per http://downloadmirror.intel.com/18128/eng/DX58SO_P...

    The fact that you have dismal results on a "1.0 slot" has nothing to do with it being 1.0, and everything to do with available bandwidth. If you put the exact same chip on a PCIe 1.0 4-lane card, you would see performance identical to your one-lane card in a PCIe 2.0 slot (possibly better, if the controller can push more than 500 MB/s). (I would have liked to see performance numbers running that card in the AMD board's PCIe 2.0 one-lane slot.)
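
    A quick sanity check of those lane-bandwidth figures (a rough sketch; the table and function name are just illustrative, using the nominal per-lane rates of 250MB/s for PCIe 1.0 and 500MB/s for PCIe 2.0 in each direction and ignoring encoding overhead):

        # Nominal per-lane bandwidth in MB/s, each direction (encoding overhead ignored).
        PER_LANE = {"1.0": 250, "2.0": 500}

        def link_bandwidth(generation: str, lanes: int) -> int:
            """Nominal one-way bandwidth in MB/s for a PCIe link."""
            return PER_LANE[generation] * lanes

        print(link_bandwidth("1.0", 4))  # 1000 - four PCIe 1.0 lanes
        print(link_bandwidth("2.0", 1))  # 500  - one PCIe 2.0 lane
        print(link_bandwidth("1.0", 8))  # 2000 - an eight-lane 1.0 pool (ICH10-style)
        print(link_bandwidth("2.0", 2))  # 1000 - the SB850's two 2.0 lanes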
  • Anand Lal Shimpi - Thursday, March 25, 2010 - link

    The problem is that the on-board controllers and the cheaper add-in cards are all PCIe 2.0 x1 devices.

    Intel's latest DX58SO BIOS lists what mode the PCIe slots are operating in and when I install the HighPoint card in the x4 it lists its operating mode as 2.5GT/s and not 5.0GT/s. The x16 slots are correctly listed as 5.0GT/s.

    Take care,
    Anand
  • qwertymac93 - Thursday, March 25, 2010 - link

    While the SB850 has two PCIe 2.0 lanes, the 890GX northbridge has 16 for graphics cards and another six lanes for anything else (that's 24 in total, by the way). The southbridge is connected to the northbridge with something similar to four PCIe 2.0 lanes, thus 2GB/s (16 gigabits/s). I have no idea why you think the "measly" two lanes coming off the southbridge say anything about its SATA performance, nor do I understand why you think the six lanes coming off Intel's H55 (fed by a slow DMI link) are somehow better.

    P.S. I don't think "irregardless" is a word; it's sort of a self-contained double negative. "Ir-" = not or without, "regard" = care or worth, "-less" = not or without, so "irregardless" = not without care or worth.
  • CharonPDX - Thursday, March 25, 2010 - link

    Both the SB850 and the Intel chipsets have 2 GB/s links between the NB and SB (or CPU and SB, in the P/H55.)

    And you are correct, I was not referring at all to the SB850's onboard SATA controller; solely to its PCIe slots. Six lanes of PCIe 1.0 have more available bandwidth than two lanes of PCIe 2.0. This comes into play when using an add-in card.

    (Yes, I know "irregardless" isn't a real word, it's just fun to use.)
  • CharonPDX - Thursday, March 25, 2010 - link

    P.S. Go get a HighPoint RocketRAID 640. It has the exact same SATA 6Gb/s chip as the card you used, but on an x4 connector (and with four SATA ports instead of two, and with RAID, but if you're only running one drive it should be identical). Run it in the PCIe 1.0 x4 slot on the P55 board. Compare that to the x4 slot on the 890GX board. I bet you'll see *ZERO* difference when running just one drive.

    In fact, I bet on the 890GX board, you'll see the exact same performance on the RR640 in the x4 slot as on the Rocket 600 in the x1 slot.
  • oggy - Thursday, March 25, 2010 - link

    It would be fun to see some dual C300 action :)
  • wiak - Friday, March 26, 2010 - link

    Yes, on both the AMD SB850's 6Gbps ports and the Marvell 6Gbps controller on AMD and Intel ;)
  • Ramon Zarat - Thursday, March 25, 2010 - link

    Unfortunately, testing with one drive gives us only 1/3 of the picture.

    To REALLY saturate the SATA3/PCIe bus, two drives in striped RAID 0 should have been used.

    To REALLY saturate everything (SATA3/USB3/PCIe) AT THE SAME TIME, an external SATA3-to-USB3 SSD cradle transferring to/from two SATA3 SSDs in striped RAID 0 should have been used.

    The only thing needed to get a complete and definitive picture to settle this question once and for all would have been 2 more SATA3 SSDs and a cradle...

    Excellent review, but incomplete in my view.
  • vol7ron - Thursday, March 25, 2010 - link

    It would be extremely nice to see any RAID tests, as I've been asking Anand for months.

    I think he said a full review is coming, of course he could have just been toying with my emotions.
  • nubie - Thursday, March 25, 2010 - link

    Is there any logical reason you couldn't run a video card with x15 or x14 links and send the other 1 or 2 off to the 6Gbps and USB 3.0 controllers?

    As far as I am concerned it should work (and I have a geforce 6200 modified to x1 with a dremel that has been in use for the last couple years).

    Maybe the drivers or video bios wouldn't like that kind of lane splitting on some cards.

    You can test this yourself quickly by applying some scotch tape over a few of the signal pairs on the end of the video card; you should be able to see if modern cards have any trouble linking at x9-x15 link widths.
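
    On Linux there is a less invasive way to see what width and speed a card actually trained at; a minimal Python sketch, assuming a kernel that exposes the PCIe link attributes in sysfs:

        from pathlib import Path

        # Each PCIe device directory in sysfs reports its negotiated link state.
        for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
            width = dev / "current_link_width"
            speed = dev / "current_link_speed"
            if width.exists() and speed.exists():
                print(dev.name,
                      "x" + width.read_text().strip(),
                      speed.read_text().strip())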
  • nubie - Thursday, March 25, 2010 - link

    Not to mention, where are the x4 6Gbps cards?
  • wiak - Friday, March 26, 2010 - link

    The Marvell chip is a PCIe 2.0 x1 chip anyway, so it's limited to that speed regardless of the interface to the motherboard.

    At least this says so:
    https://docs.google.com/viewer?url=http://www.marv...

    The same goes for the USB 3.0 chip from NEC; it's also a PCIe 2.0 x1 chip.
  • JarredWalton - Thursday, March 25, 2010 - link

    Like many computer interfaces, PCIe is designed to work in powers of two. You could run x1, x2, x4, x8, or x16, but x3 or x5 aren't allowable configurations.
  • nubie - Thursday, March 25, 2010 - link

    OK, x12 is accounted for according to this:

    http://www.interfacebus.com/Design_Connector_PCI_E...

    [quote]PCI Express supports 1x [2.5Gbps], 2x, 4x, 8x, 12x, 16x, and 32x bus widths[/quote]

    I wonder about x14, as it should offer much greater bandwidth than x8.

    I suppose I could do some informal testing here and see what really works, or maybe do some internet research first because I don't exactly have a test bench.
  • mathew7 - Thursday, March 25, 2010 - link

    While x12 is good for one card, I wonder how feasible x6 would be for two graphics cards.
  • nubie - Thursday, March 25, 2010 - link

    Even AMD agrees to the x12 link width:

    http://www.amd.com/us-en/Processors/ComputingSolut...

    Seems like it could be an acceptable compromise on some platforms.
  • JarredWalton - Thursday, March 25, 2010 - link

    x12 is the exception to the powers of 2, you're correct. I'm not sure it would really matter much; Anand's results show that even with plenty of extra bandwidth (i.e. in a PCIe 2.0 x16 slot), the SATA 6G connection doesn't always perform the same. It looks like BIOS tuning is at present more important than other aspects, provided of course that you're not on a PCIe 1.0 x1 link.
  • iwodo - Thursday, March 25, 2010 - link

    Well, we are speaking in terms of graphics, so a graphics card could work at x12 instead of x16, or even x10, thereby saving I/O space. Just wondering, what's the status of PCIe 3.0?
  • Shadowmaster625 - Tuesday, March 30, 2010 - link

    It sounds like AMD made a conscious decision to focus on maximum random write performance, even if it required sacrificing all other key performance metrics. I hope that is the case, because it is pretty sad that their 6Gbps controller is generally outperformed by a 3Gbps controller!
  • astewart999 - Tuesday, March 30, 2010 - link

    When talking performance, why are they not mentioning RAID 0? I suspect SATA3 is not capable of it?
  • astewart999 - Tuesday, March 30, 2010 - link

    Ignore my ignorance, I read the article then posted. Should have read the posts and ignored the article!
  • nexox - Monday, April 5, 2010 - link

    Just get a SAS II (6Gbit) PCIe HBA (LSI makes one, probably others). Plenty of speed, they generally run in a PCIe x8 slot, and you can run SATA drives on them just fine. Plus they tend not to cost too much more than the consumer-level SATA adapters, which are apparently questionable performance-wise. They'd at least make a good baseline for comparison.
  • supremelaw - Saturday, April 17, 2010 - link

    RS2BL040 and RS2BL080 are now at Newegg:

    http://www.newegg.com/Product/Product.aspx?Item=N8...
    http://www.intel.com/products/server/raid-controll...

    http://www.newegg.com/Product/Product.aspx?Item=N8...
    http://www.intel.com/products/server/raid-controll...

    Before buying, confirm whether or not TRIM will work with SSDs in RAID modes.

    http://www.pcper.com/comments.php?nid=8538
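
    One OS-level check on Windows is whether delete notifications (TRIM) are enabled at all; a minimal Python sketch shelling out to fsutil (may need an elevated prompt, and note this only shows whether Windows issues TRIM, not whether the RAID driver passes it through to the drives):

        import subprocess

        # fsutil reports DisableDeleteNotify = 0 when Windows is issuing TRIM commands.
        result = subprocess.run(
            ["fsutil", "behavior", "query", "DisableDeleteNotify"],
            capture_output=True, text=True, check=True,
        )
        print(result.stdout.strip())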

    *** UPDATE ***

    The unconfirmed bit has been confirmed as unconfirmed by Intel:

    “Intel® RST 9.6 supports TRIM in AHCI and pass through modes for RAID. A bug has been submitted to change the string that indicates TRIM is supported on RAID volumes (0,1,5,10). Intel is continuing to investigate the ability of providing TRIM support for all RAID volumes in a future release”

    Looks like we'll have to wait a little longer for TRIM through RAID, but there *are* other SSD-specific improvements in this new driver.

    *** END UPDATE ***

    MRFS
  • chrcoluk - Saturday, June 19, 2010 - link

    OK, my thoughts:

    1 - You wrote off PCIe 1.0, but failed to notice or mention that the PLX chip uses PCIe 1.0 lanes from the P55 chipset, so clearly PCIe 1.0 can supply the bandwidth if utilised properly; the PLX chip turns four 1.0 lanes into two virtual 2.0 lanes for the SATA 6G and USB3 controllers.
    2 - Some P55 boards, mine notably, have a PCIe 2.0 slot fed off the P55 chipset at x4 speed. It seems the reviewers have got something wrong, or are they claiming Asus has it wrong? Even if we assume it's actually PCIe 1.0 x4, that is still enough bandwidth to feed a SATA 6G controller. Indeed, the onboard PLX which you praised sacrifices this x4 PCIe slot and uses those four lanes to feed itself. My theory is that the U3S6 card Asus sells will perform the same as the onboard PLX when run in a x4 slot, but no reviewer has tested this properly.
    3 - What's the reason you did not test both Gigabyte's onboard controller and the lower-end Asus onboard controller, which borrow bandwidth from the primary PCIe x16 lanes? I am looking for tests of those in both turbo/levelup and normal mode.
  • gimespace - Tuesday, August 8, 2017 - link

    Try enabling DirectGMA with maximum GPU aperture in the AMD Catalyst Control Center. It not only makes the graphics card faster, but also allowed me to get up to the maximum 560MB/s read speed for my SSD!
