joel4565 - Tuesday, June 11, 2013 - link
I wonder if they actually did anything different with the drives, or if it's the same drive with just different firmware...
I am hoping to replace my Unraid server next year with an 8-10 drive FreeNAS ZFS box using RAIDZ2 in one of the lovely Nanoxia Deep Silence 1 cases, although I think that case only holds 8 drives as it ships.
I will have to see how these drives play out before deciding between these and the WD Red drives. For those wondering, I am probably going to switch from Unraid to FreeNAS because the speed in Unraid is horrible, especially the write speed unless you use an unsecured cache drive. Unraid was quite nice at the time, though, because you can add capacity at any time, and I didn't have the money to buy 8-10 drives all in one shot.
JDG1980 - Tuesday, June 11, 2013 - link
Good luck finding the Deep Silence 1 if you live in the US.
brshoemak - Tuesday, June 11, 2013 - link
Well, if he can't pick up a DS1 he could always look at the Fractal Design Define R4 or XL R2. Actually, the R4 is currently $80 on NewEgg after a promo code. Seems like another good option. But all the power to you if you can get your DS1.
Side note: I am in the market for some non-enterprise NAS drives, so this is of interest to me. I know the Reds have been around for a while and there have been some quality issues, but there are zero points of reference for these Seagate drives. Decisions, decisions.
sherlockwing - Tuesday, June 11, 2013 - link
I would caution against buying an R4 right now; that promo code signals that an R5 is due in a few months and stores are trying to clear their R4 stock. The R4 debuted last year around July, so maybe wait a few weeks.
Jumpman23 - Tuesday, June 11, 2013 - link
It's hit that price point before, a couple of months ago. If an R5 were coming out, they would've announced it at Computex. Besides, the R3 came out roughly 2 years before the R4, and the R4 has only been out for around a year or slightly less.
brshoemak - Tuesday, June 11, 2013 - link
Well, even if they release an R5 it can't be that groundbreaking compared to the R4, and I'm sure price-wise it would start out where the R4 started ($120). I actually just checked Amazon and they have it for $80 straight-up. The R4 is well worth it for my needs regardless, but I'll look forward to seeing what they can do with the R5.
Dentons - Tuesday, June 11, 2013 - link
The R3 is better than most current cases. I wouldn't worry about the R5 being much more than a facelift on the R4.
3DoubleD - Tuesday, June 11, 2013 - link
Yep, Unraid can be very slow (~20MB/s writes). As you alluded to, it is truly a budget alternative, though. If you have the money to spend on better hardware and want better performance, FreeNAS looks like the better option. That said, it doesn't look nearly as flexible in terms of adding storage. For me, I'll probably stick with Unraid for the time being. It is fast enough for home use.
joel4565 - Tuesday, June 11, 2013 - link
Speed is not the only advantage of FreeNAS. I am considering the switch because of all of the ZFS awesomeness: things like Raidz2/3 (multiple parity drives), built-in file checksums, encryption, etc. Although the encryption FreeNAS offers is not the ZFS method, since Sun/Oracle stopped sharing the source for ZFS after ZFS version 28. :(
I really wish ZFS had been licensed differently so that it could be merged into the Linux kernel. Yes, I know you can use ZFS with Linux, but not in a really good way.
Rick83 - Tuesday, June 11, 2013 - link
I just checked: there are kernel modules and source patches for the Linux kernel for ZFS. The only limitation the licensing issue brings is that you cannot distribute a ZFS-patched Linux kernel.
Compiling your own kernel for a NAS is quite trivial, so patching ZFS into it shouldn't be too much of an issue. The Linux implementation should be based very closely on the native one (it's C code, after all), with just some minor changes to adapt to the different treatment of block devices. It should be much faster than FUSE, which sounds like what you are describing. While still not quite as fast as the native Unix version, the delta isn't as crippling as it used to be. I think I might give it a try for my backup array some time.
Guspaz - Tuesday, June 11, 2013 - link
You don't need to compile the kernel to use native ZFS on Linux. There are repositories for that (the package uses DKMS to compile the kernel module, so that's all automatic; it even recompiles if you install a new kernel), so installing ZFS is trivially easy. For example, on Ubuntu, I believe you just do:
# apt-add-repository --yes ppa:zfs-native/stable
# apt-get update
# apt-get install ubuntu-zfs
and that's it. Native kernel module ZFS is installed. Instructions for other distributions aren't all that different, I believe.
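If you want to confirm the DKMS build actually worked, a quick sanity check looks something like this (with no pool created yet, status just reports that):
# modprobe zfs
# zpool status
no pools available
If modprobe fails, the module didn't compile against the running kernel.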
Thermopyle - Tuesday, June 11, 2013 - link
ZFS on Linux is a great solution for ... wait for it ... using ZFS on Linux!
http://zfsonlinux.org/
extide - Tuesday, June 11, 2013 - link
I have been running ZFS on Linux using the natively built kernel module method (not FUSE) and it has been great. Check out www.zfsonlinux.com. I am running it on Ubuntu Server, so I am doing everything manually, but there is at least a PPA archive so you can install ZFS with apt-get... it really doesn't get much easier than that...
AlexFeren - Thursday, June 13, 2013 - link
> "ZFS ... doesn't look nearly as flexible in terms of adding storage"You're correct, once a raidz zpool is created (consisting of a set of disks) it cannot accept additional disks.
Dentons - Tuesday, June 11, 2013 - link
It's largely suspected that both these and WD's NAS drives are their cheapest commodity drives with slightly different firmware. They're no more or less reliable, but cost one hell of a lot more.
It's very difficult to recommend these for any home-use scenario. Just get the cheapest drives you can find. Even the drives that ship in external enclosures work fine in FreeNAS. It's very likely they're the same hardware.
joel4565 - Tuesday, June 11, 2013 - link
What about Time Limited Error Recovery (TLER)? Isn't it recommended to use TLER drives for ZFS? Or is there an option in FreeNAS to tell it to wait longer on the drive?
Dentons - Tuesday, June 11, 2013 - link
I've built a number of FreeNAS boxes with commodity drives. FreeNAS doesn't need TLER, but some users do flash new firmware. I haven't found any need to do so.
Most of the drives for the systems I built were pulled from external enclosures. The drives Seagate and WD package into external USB enclosures are much cheaper than their bare drives and seem to be mechanically identical.
When the Thai floods occurred, one nascent cloud vendor hit Costcos all over the country to buy up external-enclosure drives before the prices really spiked. The hard drives themselves are the same.
bobbozzo - Wednesday, June 12, 2013 - link
FWIW, the external drives I have bought have often had only 1- or 2-year warranties, and the warranty is potentially voided if you open the case.
Solandri - Tuesday, June 11, 2013 - link
TLER is important for RAID because RAID will automatically jump to the conclusion that the drive is bad if it fails to respond within a certain amount of time (which is where TLER comes in), and drop the drive from the array.
ZFS does parity error checking on a per-file basis. That is, if a drive fails to respond while reading a file, ZFS will mark the file as bad. (Actually, it will also automatically heal the file, but that digresses into the goodness of ZFS over RAID.) Only if it detects a pattern of failures indicative of a drive failure will it mark a drive as bad. A single delayed response while reading a file won't drop a drive. If your drive has TLER, it's generally recommended to turn it off with ZFS.
And even dropping a drive is not that big a deal for ZFS. I had to fiddle with an intermittent SATA cable for a couple of months when I put together my ZFS box. It caused a drive to drop about once a week while I was tracking down the problem. Every time I fixed it, ZFS noticed the "new" drive had the original parity data on it and simply used it. I did run a sync (called a scrub), which reads every file to check its integrity; that added parity data for the few files which were written while the drive was dropped. But all the other data on the dropped drive didn't need to be rewritten like it generally does with a RAID rebuild. That's because ZFS thinks of parity in terms of files, not in terms of drives like RAID does. Heck, if you want, you can create a parity-based ZFS volume on a single drive. It won't protect against a drive failure, but it will protect files against bit-rot on the drive.
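For reference, the whole recover-after-a-drop workflow is just two commands (the pool name is a placeholder):
# zpool status -v tank
# zpool scrub tank
status shows which disk dropped and whether it came back; scrub re-reads and verifies everything, repairing any blocks written while the disk was missing.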
Guspaz - Tuesday, June 11, 2013 - link
Parity and checksums (and most everything else, like snapshots and deduplication) in ZFS don't work on the file level; they work on the block level. It's possible (especially with ZFS using dynamic block sizes) that your file fits entirely inside a single block, but it's not guaranteed.
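You can see (and tune) the block-size ceiling per dataset yourself; a small sketch, with a placeholder dataset name:
# zfs get recordsize tank/media
# zfs set recordsize=64K tank/media
recordsize is the maximum block size. Files smaller than it are stored in a single, smaller block, which is the dynamic sizing mentioned above.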
mavere - Tuesday, June 11, 2013 - link
SPCR tested and concluded that WD's Red drives are the quietest, least vibration-prone 3.5" drives available. As a plus, they consume less power and don't have the ridiculously aggressive and noisy head parking seen in 'green' drives.
As someone whose speed needs are taken care of by SSDs, and who is more than willing to buy 2.5" disk drives just for the silence and efficiency, I can't imagine getting anything but a Red (or at least a drive that's been proven to be competitive).
Honestly, flash-based storage is good enough for most workloads nowadays, so I think it's time for disk drives to do something useful by being neither seen nor heard.
mercutiouk - Saturday, June 15, 2013 - link
The thing is, while TLER is "bad" in a RAID, there's another angle to consider. If a drive DOESN'T respond in a reasonable time, that's probably a drive with a bad block having trouble reading.
Let's say another drive in the same array outright drops dead...
Come recovery time (using a RAID 5 example here, but it applies to any RAID method using calculated parity and n-1 storage space), you try to rebuild the array... and that same bad block (hidden by the "NAS-friendly" TLER setting) now means your array is toast.
mercutiouk - Saturday, June 15, 2013 - link
I've been using a bunch of Spinpoint F3s in a RAID 5 for about... 3 years without any issues. Given that most consumer drives have a "drive head parking" setting (which they do) or outright "spin down" idle settings, there's no real reason a regular drive should struggle to run 24/7.
Gigaplex - Monday, August 12, 2013 - link
That's not how it's supposed to work. The TLER drive with the bad block will time out early; the RAID will detect this and try to repair it by rewriting the block, recalculated from parity. If it's an isolated bad block, the drive will remap it to a good sector. Only if the write also fails is the drive dropped.
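For what it's worth, on drives that support it you can query and set the ERC (TLER) timers yourself with smartmontools; a sketch, with a placeholder device name:
# smartctl -l scterc /dev/ada0
# smartctl -l scterc,70,70 /dev/ada0
# smartctl -l scterc,0,0 /dev/ada0
The first line reads the current read/write timers, the second sets both to 7 seconds (the unit is 100 ms), and the third disables ERC, which is what was suggested for ZFS above. The setting is usually lost on a power cycle, so it has to be reapplied at boot.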
mrow - Tuesday, June 18, 2013 - link
Just an FYI, the next version of Unraid is going to include a feature that will allow you to use multiple disks/SSDs in a Btrfs "cache pool", which will provide fault tolerance. Add two or three cheap laptop disks or SSDs and you'll get really high-speed reads and writes, and your data will be protected from disk failure before it's moved from the cache to the array.
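Outside of Unraid's UI, a fault-tolerant two-device pool like that is only a couple of commands in plain Btrfs; a sketch, with placeholder device names and mount point:
# mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
# btrfs device scan
# mount /dev/sdb /mnt/cache
That mirrors both data and metadata across the two devices, so losing one cache disk doesn't lose the writes that haven't yet been moved to the array.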
arthur449 - Tuesday, June 11, 2013 - link
The reason I haven't bought or recommended Seagate drives in the last 5 years is that they 'fixed' the drive reliability problems customers were having by cutting their product warranties down to 1 year from 3-5. They've recently bumped their warranties back up to 2 years on some drives.
Let's hope Seagate differentiates these drives with a compelling price point, at least a 3-year warranty, and a solid launch backed up by positive reviews.
creed3020 - Tuesday, June 11, 2013 - link
Agreed, warranty is everything when it comes to mechanical disk drives. Kudos to Seagate for bringing something to the table to compete with the WD Reds. The pricing is competitive, but I may just go for the 4TB 7200rpm Seagate regardless for my Synology NAS drive upgrade.
Dentons - Tuesday, June 11, 2013 - link
Long drive warranties are just insurance policies. To buy a drive with double the warranty, you'll typically pay at least 50% more.
Consider the cheapest drives you can buy, with 1-year warranties. By the time such a drive fails, on average, you'll typically be able to buy a brand new drive more cheaply than had you bought a drive with a longer warranty.
Buy 10 drives today for $1500 ($150 ea) or buy 10 drives today for $1000 ($100 ea). The average failure period of these drives (when kept in a well ventilated, stationary enclosure with stable power) will be the same, and is typically quite long. It's hard to say for sure, but my anecdotal guess would be 3 to 5 years of constant use. Some will fail earlier, some very much later.
At least one will probably fail in the 1-year warranty period, and that will be covered. How many will fail in the 2 years following the warranty expiration? Two? Three? Even if it's that many, they'll probably cost $75 in 2 years, $50 in 3.
For most users, long warranties just don't make a lot of economic sense. As far as I can tell, these NAS drives are just the cheap commodity drives with slightly different firmware and an insurance policy. Unless you really need TLER, go with the cheaper drives.
joel4565 - Tuesday, June 11, 2013 - link
I have been using only WD for my spinning drives for the past few years. I have been lucky and haven't had a drive failure in several years with my Unraid box. The last drive I had fail was a Seagate 320GB drive several years ago.
lurker22 - Tuesday, June 11, 2013 - link
Seagate is an instant pass. A decade ago they were great; then they purchased a slew of low-quality HDD manufacturers (e.g. Maxtor) and their quality went down the toilet. All my Seagates have had problems; not one problem with WD. As always, your experience may vary.
Dentons - Tuesday, June 11, 2013 - link
In my experience, the overall quality differences between Seagate and WD are very small. Both brands have lemons; if you've personally been burned by one brand or the other, you're not going to be happy. But that doesn't mean that brand is worse or better, it only means you were unlucky.
There is no "Consumer Reports" for hard drive quality, so nearly everything written about either manufacturer is anecdotal. The drive recovery techs I know say they don't see any major quality difference between the brands, and I believe them. Google said much the same in their now-dated hard drive analysis paper.
From my experience, what really kills drives is keeping them in those terrible fanless external enclosures (like the ones sold by WD and Seagate), moving them around, and not keeping them isolated from power spikes.
Solandri - Tuesday, June 11, 2013 - link
Actually, there is a "Consumer Reports" of HDD quality.
http://www.storagereview.com/php/survey/survey_mfr...
Unfortunately, it's not widely known; you need to submit a data point to get access to the database; and the site has dropped in popularity over the last 5 years, so data on current drives is very sparse (as in useless, because they won't show statistics until they get a large enough data set).
But back in the late 1990s/early 2000s it offered some fascinating insight into HDD quality, the most important being that brand doesn't really matter. Yes, some brands tended to be somewhat better than others (e.g. Quantum was one of the best, with nearly all their drives scoring above the 50th percentile).
But every manufacturer had lemons and all-stars. In other words, the model of the drive mattered a lot more than the manufacturer. The IBM 75GXP (aka the "Deathstar") was one of the least reliable drives in the survey. But the model that replaced it was one of the most reliable.
Dentons - Wednesday, June 12, 2013 - link
I think my statement stands. If the site has been mostly worthless for half a decade, it's no "Consumer Reports" for hard drives.
One gathers the large data center operators have all this data and more, but they're not sharing. Not that it really seems to matter. There are only two major manufacturers left, and they seem to be producing quite similar hardware, both in features and longevity.
KITH - Tuesday, June 11, 2013 - link
They have since also bought quality manufacturers (Samsung), so their quality should be going up.
wbwb - Tuesday, June 11, 2013 - link
There's no mention anywhere of the warranty on these drives. Do we assume they'll match Western Digital Red's 3 years, or the rest of their own lineup's 1-2 years?
MadAd - Tuesday, June 11, 2013 - link
8760 power-on hours / 24 = 365 days. So what happens after 1 year?
Chicken76 - Tuesday, June 11, 2013 - link
Good question! I too would like to know the answer.
ElvenLemming - Tuesday, June 11, 2013 - link
A quick Google search found me this Seagate article: http://enterprise.media.seagate.com/2010/04/inside...
My interpretation of that is that power-on hours is simply an assumption used to calculate other reliability metrics (in that post, it was used to calculate AFR). However, given that this table doesn't list AFR, I'm not sure exactly why they would list power-on hours. Hopefully someone else understands this better.
KITH - Tuesday, June 11, 2013 - link
They don't expect these drives to see 24/7 use. They are not rating them for heavy enterprise-level workloads.
wbwb - Tuesday, June 11, 2013 - link
Seagate does expect 24x7 use; they say so on their website, which is linked at the bottom of the article.
bsd228 - Wednesday, June 12, 2013 - link
If you look at the non-NAS models, they list 2400 hours. That figure is what they suggest the annual usage should be (8 hours per day) for the claimed reliability. It's a goofy spec to list. You can (and I do) read it as a lack of confidence in running the drive 7x24. With this new line, they're giving an annual-use expectation of 7x24. I have been waiting for a 4TB Red. We'll see if it exists by the time I next need some drives. Currently I'm using a large batch of 2s; I could cut my drive count in half or double my storage with the next generation.
atomt - Wednesday, June 12, 2013 - link
They recently removed TLER from their standard consumer drives, and are now selling it at a premium and giving the feature a fancy marketing name. Same as WD did some years ago. Likely the same drives for the most part, even the same firmware, with just some config bits flipped.
Yay, duopolies.
Dentons - Wednesday, June 12, 2013 - link
There are firmware updates available to flash TLER-enabled firmware onto Seagate consumer drives.
Of course, Seagate doesn't make it easy to determine which firmware is compatible with which drives. Nor does Seagate list these firmwares in any easy-to-find location.
More oddly, there doesn't seem to be a vibrant online community discussing these updates in any great detail. One has to Google drive model numbers and hunt around user forums.
My consumer drives aren't giving me any problems in the multiple FreeNAS systems I've built. I'm not going to fix something that's not broken. If I do start having problems with dropped drives, I'll definitely try the firmware updates.
zlandar - Wednesday, June 12, 2013 - link
zlandar - Wednesday, June 12, 2013 - link
The 4TB drives are interesting to me. I have four 2TB drives in RAID 5, and in less than two years I have run low on storage from all the movies/TV shows recorded to it. I might jump on these if they go on sale; the regular Seagate 4TB drives have been as low as $150.
I have a mix of mostly WD and a few Seagate/Samsung drives. The two drives that have died on me were WD, and WD is great about replacing drives. In both instances they gave me an upgraded version of the drive with a higher capacity under warranty.
Hrel - Thursday, June 13, 2013 - link
Hrel - Thursday, June 13, 2013 - link
Remember when you guys reported on Seagate making a density breakthrough and talked about 6TB hard drives? That was at least a year ago, so what's up with that? I want to be able to buy two 6TB HDDs for a total of $300. Looks like that's going to take another 2 years at this point...
thenew3 - Thursday, June 20, 2013 - link
Seagate drives are the most unreliable drives ever. I'm speaking from 10+ years of datacenter management experience. My current datacenter has over 800TB of storage, ranging from 100GB SLC SSDs through 4TB 7.2k rpm NL-SAS drives (basically SATA drives with a SAS interface). We have drives from Hitachi, Toshiba, WD, Seagate, and Fujitsu. For some reason (maybe cost) our SAN vendors tend to use Seagate drives the most; about 75% of all our drives are Seagate. We typically see around a 5% failure rate among Seagate drives, while all of the other brands combined see less than 0.5%.
For example, in one SAN with 16 Seagate 1TB NL-SAS drives, 6 of those drives have failed over the last 1.5 years. Meanwhile, in another SAN with 48 1TB NL-SAS Hitachi drives that has been running for over 6 years, only 2 drives have failed in that 6+ year period. Same capacity, same speed, same operating environment, same workload, yet Seagate has 6 out of 16 fail within 1.5 years while Hitachi has 2 out of 48 fail within 6+ years.
That's just one example, but overall, Seagate drives have a 10x or higher failure rate than other manufacturers' drives, based on my experience.
Our SAN vendor has recently switched to Toshiba drives. So far, out of the dozen or so Toshiba drives, none have failed. The oldest one has been running 24x7 for a year now.