39 Comments
softdrinkviking - Tuesday, January 4, 2011 - link
speaking of storage visions, have you seen anything about Cleversafe technology in implementation?

AnotherGuy - Tuesday, January 4, 2011 - link
Not bad, but the 128GB and 64GB models, which are the most feasible for the majority, are still kind of slower than the 256GB and 512GB parts... Let's hope SandForce raises the bar even further, so hopefully prices go down at least...

JarredWalton - Tuesday, January 4, 2011 - link
I think the magical point for me is when I can get a 256GB drive for under $200, so we're about one more generation away from the point where I can easily recommend SSDs to anyone and everyone. Right now, I'm good with 120GB for the OS drive, but then I still need a data drive, and there are many laptops where you simply don't get a second drive slot. I can live on 120GB and 160GB drives, sure, but when my email folder and documents folder suck down 40GB, I'd prefer twice that much capacity. :-)

tipoo - Tuesday, January 4, 2011 - link
Agreed, 256GB for 200 dollars is where I would gladly jump in.

anactoraaron - Tuesday, January 4, 2011 - link
Yeah, but when we get there I would worry about the longevity of the drive. At NAND smaller than ~18nm we would likely be looking at ~1000 program/erase cycles, and there's no way I would buy an SSD with what would likely be a 1-year warranty. It would be hard as a consumer/enthusiast to get behind. That's unless OSes (Windows) account for SSDs more than 7 does now by making an effort to greatly reduce program/erase cycles.

But I'm loving my F40 as a boot drive.
ImSpartacus - Tuesday, January 4, 2011 - link
Once NAND starts to die, the drive just steadily loses storage, right?

That's another reason to get a 256GB drive when you only need 160-200GB of space. Once the drive starts to kick the bucket, you don't feel the crunch for a while.
ajp_anton - Tuesday, January 4, 2011 - link
Problem is that if the controller has done its job well, pretty much all of the drive will die at once.

extide - Wednesday, January 5, 2011 - link
No, the disk will never appear to lose space. It will just work fine until it suddenly fails. That would cause all sorts of problems with partitions and file systems if the disk suddenly started shrinking...

Fritzr - Wednesday, January 5, 2011 - link
Early hard drives had this problem. It was solved by allocating the bad sectors as 'unavailable' in the File System's record keeping system.

Later IDE drives hid this problem by dynamically allocating 'spare' sectors that are normally hidden from the File System whenever a bad block is detected.
SSDs that have a gradual fail mode will simply implement the bad sector marking that was used with RLL drives. This causes a File System friendly degradation of capacity until the boot block goes bad.
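A minimal sketch of the spare-sector remapping described above (a toy model, not any vendor's actual firmware): failing blocks are transparently redirected to a hidden spare pool, and the file system only sees a bad block once the spares run out.

```python
class BlockDevice:
    """Toy block device with a hidden spare pool for bad-block remapping."""

    def __init__(self, visible_blocks, spare_blocks):
        self.visible_blocks = visible_blocks
        self.spares = list(range(spare_blocks))  # hidden from the file system
        self.remap = {}                          # logical block -> spare index

    def mark_bad(self, logical):
        # While spares remain, the failure is invisible to the file system.
        if self.spares:
            self.remap[logical] = self.spares.pop()
            return True
        # Spares exhausted: the block must now be reported bad, and the
        # file system marks it 'unavailable', RLL-style.
        return False

dev = BlockDevice(visible_blocks=1000, spare_blocks=2)
assert dev.mark_bad(5) and dev.mark_bad(9)  # hidden by the spare pool
assert not dev.mark_bad(42)                 # now the file system sees it
```

This is why capacity degradation only becomes visible late in the drive's life: the spare pool absorbs failures silently first.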
ImSpartacus - Tuesday, January 4, 2011 - link
I was just about to say the same thing. I want to install all my Steam games and not worry about space problems. Actual data is obviously not a problem with <$70/TB platter storage.

The thing that also eats me up is the performance improvement on higher capacity drives. It's more of a principle thing. If I pony up the money for an SSD, I want 100% of the controller's performance. It doesn't seem fair that more capacious drives are quicker.
Reikon - Wednesday, January 5, 2011 - link
A principle thing? You do realize the reason smaller drives tend to be slower is because there are fewer NAND chips, which results in fewer channels to write to, right? It's not being throttled.

7Enigma - Wednesday, January 5, 2011 - link
I was just about to post the same thing... for once it's not a marketing decision, it's a design consequence.

AnotherGuy - Wednesday, January 5, 2011 - link
Well, you're right, but also... it says they put only 128MB of RAM on the 64GB drive instead of 256MB like the rest of the line... not sure how much impact that has... anyone?

tipoo - Wednesday, January 5, 2011 - link
That's not their fault though. Just like 2TB standard hard drives have better performance characteristics than an equivalent smaller drive, it's just physics. It happens for different reasons in HDDs than SSDs, but the point is the same.

fic2 - Wednesday, January 5, 2011 - link
Wasn't Seagate or someone talking about hybrid drives? It would be nice to have about 64GB of flash for OS/applications and a 320GB platter for data. The best of both worlds. This summer I bought a 7200rpm 320GB laptop drive for $50. Kind of hard to justify spending 4x more for less storage even though it is faster.

NCM - Wednesday, January 5, 2011 - link
There's a factory installed 128GB SSD boot drive in our new server, and I've installed the same size in one of our workstations for evaluation. Since all work files are on the server drive the workstation only needs enough capacity for the OS and applications, plus a bit of casual storage for personal files.

Now I'd like to put an SSD in my laptop to replace the dog-slow 5400 rpm original drive, but the present capacity increments don't work well. For a main computer the (relatively) affordable 256GB isn't enough, but 512GB is a big jump way off into second mortgage price territory. Without a second bay — yeah I know, there are ways — for a conventional data drive to supplement the SSD boot drive it's hard to have this make any kind of sense.
Remember when 320GB was sort of a capacity vs. price sweet spot for laptop drives?
anandskent - Tuesday, January 4, 2011 - link
...waiting for the SF-2000 to come out and play

RGrizzzz - Tuesday, January 4, 2011 - link
Are those prices for the 2.5", 1.8" or same for both?

iwodo - Tuesday, January 4, 2011 - link
As I said in all previous SSD articles, raw speed and random I/O are only part of the performance equation. There are still unknowns inside. How the Toshiba SSD manages to perform so well is something we can't yet explain.

Although looking at pure numbers, they are not very attractive. Interesting is how IMFT is making 25nm NAND for everyone else and yet Intel's own G3 SSD is nowhere to be found.
Then there is the scary problem of NAND dropping R/W cycles with every node. It is getting to be a ridiculously small number.
semo - Wednesday, January 5, 2011 - link
At least it makes sense for Crucial/Micron to use 25nm IMFT flash, but how did OCZ end up using it in their 25nm edition Vertex 2s before anyone else? I would have expected Anand to report on 25nm drives that are out already even though they are not faster or cheaper.

AnnonymousCoward - Tuesday, January 4, 2011 - link
How about an estimate on the BOM cost of that board?

PeterO - Wednesday, January 5, 2011 - link
When a NAND cell dies, does the controller disable the cell's entire logical block of cells? thx

cactusdog - Wednesday, January 5, 2011 - link
You would struggle to notice any difference if it was installed on the Intel controller or the 6Gb/s controller. It's not a real 6Gb/s drive; they don't seem to exist yet.

AnnihilatorX - Wednesday, January 5, 2011 - link
What are you on about? Even the C300 can use SATA 6Gb/s. SATA 3Gb/s won't sustain the 400MB/s sequential reads quoted in the article.
http://i1190.photobucket.com/albums/z449/c300-revi...
from
http://www.overclock.net/ssd/859715-crucial-realss...
therealnickdanger - Wednesday, January 5, 2011 - link
I think what he means, assuming I understand properly, is that with SATA-3Gbps, we have drives that top out at 280MB/s. By extension, one would expect SATA-6Gbps drives to reach 560MB/s, assuming overhead scales the same. I agree that 415MB/s is not impressive in this regard.

For a 100% increase in theoretical bandwidth, getting a drive with only 50% greater bandwidth is disappointing. Is this due to too few channels in the C-series? Is there that much overhead with SATA-6Gbps?
Either way, it seems that the only way to achieve truly greater speed is PCIe with a REVO-style RAID-0 SSD.
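The scaling argument above can be checked back-of-envelope. SATA uses 8b/10b encoding, so each payload byte costs 10 bits on the wire; the 280MB/s and 415MB/s figures are from the comments above, the rest follows arithmetically.

```python
def sata_payload_mb_s(line_rate_gbps):
    # 8b/10b encoding: 10 line bits per data byte
    return line_rate_gbps * 1e9 / 10 / 1e6  # MB/s of payload

max_3g = sata_payload_mb_s(3)  # 300 MB/s ceiling for SATA-3Gbps
max_6g = sata_payload_mb_s(6)  # 600 MB/s ceiling for SATA-6Gbps

# A 280MB/s drive uses ~93% of the 3Gbps ceiling; the same efficiency
# on 6Gbps projects to 560MB/s, which is where that figure comes from.
projected = 280 / max_3g * max_6g
print(round(projected))       # 560

# 415MB/s is only ~69% of the 6Gbps ceiling, hence the disappointment.
print(round(415 / max_6g * 100))  # 69
```

So the shortfall isn't link overhead growing with the faster standard; at 415MB/s the drive simply isn't filling the 600MB/s payload ceiling.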
strikeback03 - Wednesday, January 5, 2011 - link
If there were already drives that could saturate SATA-6Gbps, users would be crying for an even faster standard immediately. SATA-3Gbps still isn't really pushed by mechanical HDDs and wasn't close to saturated before SSDs, so I think it is a good thing the standard isn't maxed out yet.

therealnickdanger - Wednesday, January 5, 2011 - link
Yeah, we wouldn't want technology that was awesome and too fast or something...

</sarcasm>
For real, I will wait for PCIe drives to drop in price. It's clear that 6Gbps isn't going to cut it in the long run - even when it is eventually saturated.
Spivonious - Wednesday, January 5, 2011 - link
If I write a bunch of data to one of these, and then stick it in a closet for 5 years, will my data still be readable?

therealnickdanger - Wednesday, January 5, 2011 - link
Five years? Yes, it should. I've seen research and studies indicating that the less NAND is written to, the longer it lasts. IIRC, 10 years of data retention should be simple with any modern NAND device. To what extent 10 years of retention is affected by additional writes or rewrites, I don't have hard numbers.

rickcain2320 - Wednesday, January 5, 2011 - link
Your bigger problem in 5 years will most likely be plugging the thing back in, because its interface is obsolete and nobody supports it anymore.

evilspoons - Wednesday, January 5, 2011 - link
Well, SATA is 7 or so years old at this point. IDE (from 1986 according to Wikipedia) got upgraded something like 9 times before it started to disappear, and yet you can still buy IDE USB drive enclosures.

I think you'd be fine. Besides, I'm sure someone's got to have a 5 year old computer kicking around in 10 years.
Mr Alpha - Wednesday, January 5, 2011 - link
Not necessarily. NAND cells lose their charge over time. Depending on temperature, I would expect something like 3-5 years from 25nm consumer grade NAND.

Olternaut - Wednesday, January 5, 2011 - link
So the limit of the drive is 72TB of writes, correct? How does that compare to the writing life of an old fashioned platter based hard drive?

dangerz - Wednesday, January 5, 2011 - link
Wasn't one of the points of the ONFI chips to keep the same speed for the low-end drives as the ones with more capacity? The 128GB one is much slower, you must be kidding, right?

ckryan - Wednesday, January 5, 2011 - link
What I want: the ability to pair an SSD with a HDD, with the OS smart enough to automatically manage the two drives as a single drive letter. For example, I have an OCZ Agility 60 that I use with a 7200rpm 750GB HDD, but I'd like Windows to have the ability to bridge the two drives together as a single drive letter. If the OS would manage the SSD with an eye towards extending drive life while knowing what data to put on the HDD, it would make my life a little easier. To me, having an SSD with a mechanical HDD is the way to go. A 1TB SSD is $3000. A 1TB HDD is never going to be anywhere near as fast as an SSD. So I need both, but manually managing both drives with Windows 7 gets to be a PITA. Some ability to pass the management of both drives to the OS would be a big step up. 1TB SSDs aren't going to be cheap for another 3 or 4 years.
Fritzr - Wednesday, January 5, 2011 - link
What I am reading in your post is that you do not want to separate your storage into C: & D:

Instead you want to separate your storage into C:[s] and C:[d]
Either way you have to have some way to tell the OS which of the 2 storage blocks a particular piece of data goes on. If the OS is smart enough to split storage intelligently between 2 subdirectories, then it should be smart enough to split storage intelligently between 2 drive letters.
If not then you could use a modern implementation of the DOS JOIN command that allows you to give the physical devices a subdirectory designation so you could assign storage to the specific hardware even though they share the same device designation.
The Unix compatible filesystems offer this sort of functionality already. All devices become subdirectories in the filesystem when they are mounted. Then you simply direct which subdirectory various items are stored in. You have the same management duties in terms of telling the system which data goes where, but in Unix compatible systems you don't use the drive letter, instead you use the drive directory.
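As a toy illustration of the split being debated (the mount paths, extensions, and threshold are all made up for the example; a real hybrid or caching driver would track access frequency rather than file type): with both devices mounted as subdirectories, the placement policy reduces to choosing a root per file.

```python
import os

SSD_ROOT = "/mnt/ssd"   # small, fast device mounted as a subdirectory
HDD_ROOT = "/mnt/hdd"   # large, slow device mounted as a subdirectory
HOT_EXTS = {".exe", ".dll", ".db"}  # treated as latency-sensitive here

def choose_root(filename, size_bytes, ssd_free_bytes):
    # Route small, latency-sensitive files to the SSD; everything else,
    # or anything that won't fit, goes to the HDD.
    hot = os.path.splitext(filename)[1].lower() in HOT_EXTS
    return SSD_ROOT if hot and size_bytes < ssd_free_bytes else HDD_ROOT

assert choose_root("app.exe", 10_000_000, 50_000_000) == SSD_ROOT
assert choose_root("movie.mkv", 4_000_000_000, 50_000_000) == HDD_ROOT
```

Whether that decision lives in the user's head, a script like this, or the OS driver is exactly the question in this thread.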
Theokrat - Saturday, January 8, 2011 - link
Anand has another article about the Z68 chipset, and he mentioned something that looks like what you wanted to find:

"This sounds like the holy grail of SSD/HDD setups, where you have a single drive letter and the driver manages what goes on your SSD vs. HDD. Whether SSD Caching is indeed a DIY hybrid hard drive technology remains to be seen."
http://www.anandtech.com/show/4083/the-sandy-bridg...
I haven't seen this available for Windows, but there was an article somewhere on the Internet of a guy who built a Linux server using Nexenta and ZFS. They were using SSD drives as a cache for all their RAID hard drives.
ST Nathan - Wednesday, January 19, 2011 - link
What you want already exists. HyperDuo: http://ces.cnet.com/8301-32254_1-20027657-283.html

Tarrant1701 - Monday, January 24, 2011 - link
I was a big fan of the C300, but I have to say that I am very disappointed with the lack of tech support by Crucial/Micron for the C300. The latest 0006 firmware has had many reports of stuttering on various platforms. After 2 months of users reporting this bug, the company has yet to officially even acknowledge a problem, let alone issue a fix. Take a look at this thread:

http://forum.crucial.com/t5/Solid-State-Drives-SSD...
If this is the kind of backing Crucial/Micron puts into a product, I won't care how good the C400 is ... I'll be taking my business elsewhere!