Does anyone know if moving an M.2 drive to a riser card improves cooling for the drive? I would think there is a little bit of improvement from bringing the drive off the surface of the motherboard... How, if at all, is everyone keeping their motherboard-mounted SSDs cool?
Yes, moving an M.2 drive to a riser card improves cooling (at least in my system). I checked temperatures mounted on my motherboard with a spot cooler directly on the 950 Pro. I then compared that against mounting on a Silverstone ECM20 and an Asus Hyper M.2 X4 Mini (no spot cooler). If you are able to direct good airflow to the riser card, as I was, the Asus riser card is significantly cooler even though it had no spot cooler. If you use a thermal pad on the back of the M.2 drive on the Silverstone ECM20, it may in fact do even better than the Asus, but in my system the larger standoffs allowed more air to flow behind the M.2 drive on the Asus solution. In any case, I can't seem to induce thermal throttling on my 950 Pro in my setup regardless of how hard I try. Again, it will depend on how much airflow you can deliver to it.
Those SSDs suffer from HEAT. If you don't put forced cooling on them, it will DEGRADE the speed no matter what! This is why you won't see much more than 3 GB/s!
Where is this performance heading, and where do we need to go? The Destroyer and Heavy tests definitely don't represent 95% of consumer usage. Do we need higher random QD1 read/write, or higher sequential read/write? If it's the former, we don't need firmware and powerful CPU cores tuned for QD32; will that save cost? And why are we still at around 5W of power usage? When can we get that down to 2W or less?
@Billy Tallis: Would it be possible to test on another motherboard? There seems to be a clear bandwidth issue for sequential transfers; other tech sites such as https://www.computerbase.de/2016-10/samsung-ssd-96... achieved almost 3400 MB/s in CrystalDiskMark for the 512 GB model, although at first they struggled to reach more than 3100 MB/s. (Samsung's figures were also obtained with CrystalDiskMark, according to ComputerBase.)
Great table in the article summarizing the different Samsung NAND technologies. Here's a summary of the different types of NAND and which products they were in.
27nm MLC: 830, PM830
21nm MLC: 840 Pro
21nm TLC: 840
19nm MLC: XP941
19nm TLC: 840 Evo
16nm MLC: SM951
16nm TLC: 750 Evo, PM951
32-layer 86Gbit V-NAND MLC: 850 Pro
32-layer 128Gbit V-NAND TLC: 850 Evo
32-layer 128Gbit V-NAND MLC: 950 Pro
48-layer 128Gbit V-NAND MLC: 960 Pro, SM961
48-layer 128Gbit V-NAND TLC: 960 Evo, PM961
The Samsung 2.1 NVMe driver has DPC latency issues on my end. I am using a Samsung 960 EVO 500GB NVMe drive; with the native Microsoft driver (Windows 10 with the latest updates installed) or the Samsung 2.0 driver all is OK and the DPC latency reports are fine: LatencyMon and DPC Latency Checker both report trouble-free operation. However, after installing the 2.1 driver, both tools report problems with storport.sys and DPC issues. Rolling back from the Samsung 2.1 driver to the Microsoft or Samsung 2.0 driver, everything goes back to normal.
"...a more thorough comparison of how NVMe drivers and operating system versions affect performance will be coming in the future."
Has this been published? I'm very interested in just such an analysis. I recently obtained an Intel 750 400GB card and want to know all about the ideal driver setup under Windows 10 and possibly 7.
JoeyJoJo123 - Tuesday, October 18, 2016 - link
Not too surprised that Samsung, once again, achieves another performance crown for another halo SSD product.
Eden-K121D - Tuesday, October 18, 2016 - link
Bring on the competition
ibudic1 - Tuesday, October 18, 2016 - link
Intel 750 is better. The only difference you can actually tell is 4K random write at QD1-4. It's also really bad when you don't have consistency when you need it; there's nothing worse than a hanging application. It's about consistency, not outright speed. Which reminds me...
When evaluating graphics cards, a MINIMUM frame rate is WAY more important than the average or maximum.
Just like in racing, the slowest speed in the corner is what separates great cars from average ones.
Hopefully AnandTech can recognize this in future reviews.
Flying Aardvark - Wednesday, October 19, 2016 - link
Exactly. The Intel 750 is still the king for someone who seriously needs storage performance: 4K randoms and zero throttling.
I'd stick with the EVO or 600p 3D TLC stuff unless I really needed the performance; then I'd go all the way up to the real professional stuff with the 750. I need a 1TB M.2 NVMe SSD myself and am eager to see street prices on the 960 EVO 1TB and Intel 600p 1TB.
iwod - Wednesday, October 19, 2016 - link
Exactly. When the majority (90%+) of consumer usage is going to be at QD1, giving me QD32 numbers is like a megapixel or MHz race. I used to think we had reached the limit of random read/write performance. It turns out we haven't actually improved random read/write at QD1 much, so it is likely still the bottleneck.
And yes, we need consistency in the QD1 random speed tests as well.
dsumanik - Wednesday, October 19, 2016 - link
Nice to see there are still some folks out there who aren't duped by marketing; random write and full-capacity consistency are the only two things I look at. When moving large video files around sequential speed can help, but the difference between 500 and 1000 MB/s isn't much: you start the copy and then go do something else. In many cases random write is the bottleneck for the times you are waiting on the computer to "do something", and it dictates whether the computer feels "snappy". Likewise, the performance loss when a drive is getting full also makes you notice things slowing down.
Samsung, if you are reading this, go balls-out on random write performance in the next generation, tyvm.
Samus - Wednesday, October 19, 2016 - link
You can't put an Intel 750 in a laptop though, and it also caps at 1.2TB. But your point is correct, it is a performance monster.
edward1987 - Friday, October 28, 2016 - link
Intel SSD 750 SSDPEDMW400G4X1 (PCI Express 3.0 x4, HHHL) vs. Samsung SSD 960 PRO MZ-V6P512BW (M.2 2280, NVMe):
IOPS: 230-430K vs. 330K
Read speed (max): 2200 MB/s vs. 3500 MB/s
Much better in comparison: http://www.span.com/compare/SSDPEDMW400G4X1-vs-MZ-...
shodanshok - Tuesday, October 18, 2016 - link
Let me issue a BIG WARNING against disabling write-buffer flushing. Any drive without special provisions for power loss (e.g. a supercapacitor) can lose a lot of data in the event of an unexpected power loss. In the worst case, the entire filesystem can be lost.
What do the two Windows settings do? In short:
1) "Enable write cache on the device" enables the controller's private DRAM writeback cache, and it is *required* for good performance on SSDs. The reason is exactly the one cited in the article: for good performance, flash memory requires batched writes. For example, with the DRAM cache disabled I recorded a write speed of 5 MB/s on an otherwise fast Crucial M550 256 GB; with the DRAM cache enabled, the very same disk almost saturated the SATA link (> 400 MB/s).
However, a writeback cache implies some risk of data loss. For that reason the IDE/SATA standards include special commands to force a full cache flush when the OS needs to be sure about data persistence. This brings us to the second option...
2) "Turn off write-cache buffer flushing on the device": this option should absolutely NOT be enabled on consumer, non-power-protected disks. With it enabled, Windows will *not* force a full cache flush even for critical tasks (e.g. updates of NTFS metadata). This can have catastrophic consequences if power is lost at the wrong moment. I am not talking about "simple", limited data loss, but entire filesystem corruption. The key reason for such catastrophic behavior is that cache-flush commands are used not only to persist critical data, but also to properly order its writeout. In other words, with cache flushing disabled, key filesystem metadata can be written out of order. If power is lost during an incomplete, badly-reordered metadata write, all sorts of problems can happen.
This option exists for one, and only one, case: when your system has power-loss-protected arrays/drives, you trust your battery/capacitor, AND your RAID card/drive behaves poorly when flushing is enabled. However, basically all modern RAID controllers automatically ignore cache flushes while the battery/capacitor is healthy, negating the need to disable cache flushes on the software side.
In short, if such a device (the 960 Pro) really needs cache flushing disabled to shine, that is a serious product/firmware flaw which needs to be corrected as soon as possible.
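To make the flush semantics concrete, here is a minimal sketch (Python, with a made-up file path) of how an application asks the OS for durability. On Windows, os.fsync() boils down to FlushFileBuffers(), and as I understand the setting, that is exactly the request which "turn off write-cache buffer flushing" tells Windows not to forward to the device as a cache-flush command:
    import os

    def save_record(path, data: bytes) -> None:
        # Write and ask the OS to push the data all the way to stable media.
        with open(path, "wb") as f:
            f.write(data)         # lands in the OS page cache
            f.flush()             # user-space buffer -> OS
            os.fsync(f.fileno())  # OS -> drive; normally translated into a
                                  # device cache flush (FlushFileBuffers on
                                  # Windows). With buffer flushing "turned
                                  # off", the drive never receives it.

    # Hypothetical usage: save_record("C:/temp/state.bin", b"important bytes")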
Br3ach - Tuesday, October 18, 2016 - link
Is power loss a problem for M.2 drives though? E.g. my PSU's (Corsair AX1200i) capacitors keep the MB alive for probably 1 minute following power loss - plenty of time for the drive to flush any caches, no?
DanNeely - Tuesday, October 18, 2016 - link
Does your mobo power the M.2 slot, or just the LEDs? Barring evidence to the contrary I'd assume it's only the latter that is getting power, and enough residual power to run a few LEDs for a minute would only give a few seconds for the 960 in its deepest power-saving modes, or far less while doing writes.
bji - Tuesday, October 18, 2016 - link
How does your computer know to shut down in that event? Is there a signal to the operating system from the power supply to notify it that power has been lost and that it should shut down? Because if not, all that will happen is that 1 minute more of data will be written to the drive, only to be lost when the power abruptly cuts out when the capacitors lose their charge.
ddriver - Tuesday, October 18, 2016 - link
Obviously it doesn't matter if the PSU doesn't send a signal to the system, which it doesn't. It wouldn't matter even if you had a UPS that could last an hour, if it can't signal the system to shut down or at least flush caches before power runs out completely.
noeldillabough - Tuesday, October 18, 2016 - link
I was thinking the exact same thing. Ack, no battery/capacitors? I'd never turn off buffer flushing.
Billy Tallis - Tuesday, October 18, 2016 - link
I agree that what you've described is what those options *seem* to mean. But the semantics behind those checkboxes are clearly very different for NVMe drives and SATA drives, and it is an outright bug for Microsoft to apply the same description to both cases. The Samsung 960 Pro is also not the only drive to severely underperform without disabling write-cache buffer flushing; the 950 Pro without Samsung's driver seems to be similar, and I've seen this behavior on at least one other vendor's NVMe controller. This is a serious concern that requires further investigation, but I'm not ready to lay the blame on the Samsung 960 Pro. If Microsoft's default for NVMe drives is the most reasonable behavior for consumer workloads (including the risk of power loss), then that would imply that most or all of the vendor-specific NVMe drivers are playing fast and loose with data safety, and possibly so are Microsoft's SATA/AHCI drivers.
shodanshok - Tuesday, October 18, 2016 - link
"that would imply that most or all of the vendor-specific NVMe drivers are playing fast and loose with data safety, and possibly so are Microsoft's SATA/AHCI drivers"This can be quite true, especially considering as some vendors publish "turbo-cached mode" that supposedly enhance disk write speed. By the way the storage controller drives is such a critical kernel component that I will try hard to stay with Microsoft own driver, unless extensive testing on vendor-specific drivers confirms their stability.
HollyDOL - Tuesday, October 18, 2016 - link
Wouldn't INT 0 (power loss) fire fast enough to execute a flush command in time for a decent PSU to handle it before running out of power? Most "decent+" PSUs seem to have quite a power buffer in their capacitors to survive that long... with 300k IOPS the drive should manage to save with a decent margin.
Even my old Corsair TX manages to survive micro-outages without the computer shutting down or crashing. AFAIK an ATX 2.01 PSU is required to endure at least a 17ms power outage without losing output power. With 330k IOPS at hand it should be enough for a quick save.
Not that I'd be all out to go and try :-)
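Rough back-of-envelope sketch (Python; the 17 ms hold-up figure and the 4K transfer size are just assumptions, and a real drive flushes its whole DRAM buffer rather than counting IOPS):
    # How much buffered data could in theory be committed during PSU hold-up?
    HOLDUP_S = 0.017      # assumed ATX hold-up time after input power is lost
    IOPS = 330_000        # headline random-write IOPS from the spec sheet
    IO_SIZE = 4096        # bytes per 4K random write

    ios_in_window = IOPS * HOLDUP_S
    megabytes = ios_in_window * IO_SIZE / 1_000_000
    print(f"~{ios_in_window:.0f} IOs, ~{megabytes:.0f} MB in {HOLDUP_S * 1000:.0f} ms")
    # -> roughly 5600 IOs, ~23 MB: enough to empty a typical write cache,
    #    but only if something actually tells the drive to flush in time.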
beginner99 - Wednesday, October 19, 2016 - link
This would be something the guys at AnandTech could test. It would also probably help build back the site's reputation and its output of interesting articles.
Create a script that does some file system operations, then pull the plug. Repeat 10 times for each drive, driver and setting and see what happens. Yeah, a lot of work.
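Something like this for the write side (a rough Python sketch; file names and record layout are placeholders). After every pulled-plug reboot, a checker script would verify that each record the log claims was fsync'd is actually intact on the drive under test:
    import itertools, os, struct, zlib

    LOG = "committed.log"        # placeholder: keep this on a trusted drive
    DATA = "victim/records.bin"  # placeholder: lives on the drive under test

    def torture_writer():
        os.makedirs("victim", exist_ok=True)
        with open(DATA, "ab") as data, open(LOG, "a") as log:
            for seq in itertools.count():
                payload = os.urandom(4096)
                record = struct.pack("<QI", seq, zlib.crc32(payload)) + payload
                data.write(record)
                data.flush()
                os.fsync(data.fileno())  # drive claims this record is durable
                log.write(f"{seq}\n")    # logged only AFTER the fsync returned
                log.flush()
                os.fsync(log.fileno())

    if __name__ == "__main__":
        torture_writer()  # run until someone pulls the plug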
leexgx - Tuesday, October 25, 2016 - link
Only Intel SSDs that have supercaps never lose data: the Intel 320 and S3500. (One site tested this and only the Intel SSDs never corrupted anything; some SSDs flat out failed, like the Crucial M4.)
http://lkcl.net/reports/ssd_analysis.html
http://www.extremetech.com/computing/173887-ssd-st...
Normal SSDs with small caps (not supercaps) that claim power loss protection only protect the mapping table from being trashed, not the data that is currently being written; that can still be lost.
Gigaplex - Tuesday, October 18, 2016 - link
"then that would imply that most or all of the vendor-specific NVMe drivers are playing fast and loose with data safety"I would not be surprised if that's exactly what they're doing.
emn13 - Wednesday, October 19, 2016 - link
Especially since NAND hasn't magically gotten a lot faster with the SATA->NVMe transition. If SATA is fast enough to saturate the underlying NAND+controller combo when it must actually write to flash, then NVMe simply looks unnecessarily expensive (if you look at writes only). And since the fast NVMe drives all have RAM caches, it's hard to detect whether data is properly being written.
Perhaps Windows is doing something odd here, but it's definitely fishy.
jhoff80 - Tuesday, October 18, 2016 - link
This is probably a stupid question because I've been changing that setting for years on SSDs without even thinking about it, and you clearly know more about this than I do, but does the use of a drive in a laptop (e.g. battery-powered) or with a UPS for the system negate this risk anyway? That was always my impression, but it could very much be wrong.
shodanshok - Tuesday, October 18, 2016 - link
Having a battery, laptops are inherently safer than desktops against power loss. However, a bad (or missing) battery and/or a failing cable/connector can expose the disk to the very same unprotected power-loss scenario.
Dr. Krunk - Sunday, October 23, 2016 - link
What happens if you accidentally press the battery release button and it pops out just enough to lose the connection?
woggs - Tuesday, October 18, 2016 - link
I would love to see AnandTech do a deep dive into this very topic. It's important. I've heard that Windows and other apps do excessive cache flushing when it's enabled, and that's also a problem. I've also heard Intel SSDs completely ignore the cache flush command and simply implement full power-loss protection. Batching writes into ever larger pieces is a fact of SSD life and it needs to be done right.
voicequal - Tuesday, October 18, 2016 - link
Agreed. Last year I traced slow disk I/O on a new Surface Pro 4 with a 256GB Toshiba XG3 NVMe to the write-cache buffer flushing, so I checked the box to turn it off. Then in July, another driver bug caused the Surface Pro 4 to frequently lock up and require a forced power-off. Within a few weeks I had a corrupted Windows profile and system file issues that took several DISM runs to clean up. I don't know for sure if my problem resulted from the disabled buffer flushing, but I'm now hesitant to re-enable the setting.
It would be good to understand what this setting does with respect to NVMe driver operation, and interesting to measure the impact / data loss when power loss does occur.
Kristian Vättö - Tuesday, October 18, 2016 - link
I think you are really exaggerating the problem. DRAM caches were used in storage well before SSDs became mainstream. Yes, HDDs have DRAM caches too and they're used for the same purpose: to cache writes. I would argue that HDDs are even more vulnerable, because data sits in the cache for a longer time due to the much higher latency of platter-based storage.
Because of that, all consumer-friendly file systems have resilience against small data losses. In the end, only a few MB of user data is cached anyway, so it's not like we are talking about a major data loss. It's small enough not to impact user experience, and the file system can recover itself in case there was metadata in the lost cache.
If this were a severe issue, there would have been a fix years ago. For client-grade products there is simply no need, because 100% data protection and uptime are not required.
shodanshok - Tuesday, October 18, 2016 - link
The problem is not the cache, but rather ignoring cache flush requests. I know DRAM caches have been used for decades, and when disks lied about flushing them (in the good old IDE days), catastrophic filesystem failures were much more common (see the XFS or ZFS FAQs / mailing lists for some references, or even the SATA command specifications).
I'm not exaggerating anything: it is a real problem, greatly debated in the Linux community in the past. From https://lwn.net/Articles/283161/
"So the potential for corruption is always there; in fact, Chris Mason has a torture-test program which can make it happen fairly reliably. There can be no doubt that running without barriers is less safe than using them"
This quote is ext3-specific, but other journaled filesystems behave in very similar manners. And hey - the very same Windows checkbox warns you about the risks of disabling flushes.
You should really ask Microsoft what these checkboxes do in its NVMe driver. Anyway, suggesting that users disable cache flushes is bad advice (unless you don't use your PC for important things).
Samus - Wednesday, October 19, 2016 - link
I don't think people understand how cache flushing works at the hardware level.
If the operating system has buffer flushing disabled, it will never tell the drive to dump its cache when, for example, an operation is complete. In that case the drive will hold onto whatever data is in cache until the cache fills up, and then the drive firmware will trigger the controller to write the cache to flash.
Since OSes randomly write data to disk all the time, bits here and there go into the cache to prevent disk thrashing/NAND wear, all determined in hardware. This has nothing to do with pooled or paged data at the OS level or RAM data buffers.
Long story short, it's moronic to disable write-buffer flushing. Normally the OS will command the drive to flush after I/O operations (like a file copy or write) complete, ensuring the cache is clear as the system enters idle; this happens hundreds if not thousands of times per minute, and it's important for fundamentally protecting the data in cache. With buffer flushing disabled the cache will ALWAYS have something in it until you shut down - which is the only time (other than suspend) a buffer flush command will be sent.
Billy Tallis - Wednesday, October 19, 2016 - link
"With buffer flushing disabled the cache will ALWAYS have something in it until you shutdown - which is the only time (other than suspend) a buffer flush command will be sent."I expect at least some drives flush their internal caches before entering any power saving mode. I've occasionally seen the power meter spike before a drive actually drops down to its idle power level, and I probably would have seen a lot more such spikes if the meter were sampling more than once per second.
Gigaplex - Tuesday, October 18, 2016 - link
"Because of that, all consumer friendly file systems have resilience against small data losses."And for those to work, cache flush requests need to be functional for the journalling to work correctly. Disabling cache flushing will reintroduce the serious corruption issues.
emn13 - Wednesday, October 19, 2016 - link
"100% data protection is not needed": at some level that's obviously true. But it's nice to have *some* guarantees so you know which risks you need to mitigate and which you can ignore.Also, NVMe has the potential to make this problem much worse: it's plausible that the underlying NAND+controller cannot outperform SATA alternatives to the degree they appear to; and that to achieve that (marketable) advantage, they need to rely more on buffering and write merging. If so, then it's possible you may be losing still only milliseconds of data, but that might cause quite a lot of corruption given how much data that can be on NVMe. Even though "100%" safe is possibly unnecessary, that would make the NVMe value proposition much worse: not only are such drives much more expensive, they also (in this hypothesis) would be more likely to cause data corruption - I certainly wouldn't buy one given that tradeoff; the performance gains are simply too slim (in almost any normal workload).
Also, it's not quite true that "all consumer friendly file systems have resilience against small data losses". Journalled filesystems typically only journal metadata; not data - so you may still have a bunch of corrupted files. And, critically - the journaling algorithms rely on proper drive flushing! If a drive can lose data that has been flushed (pre-fsync-writes), then even a journalled filesystem can (easily!) be corrupted extensively. If anything, journalled filesystems are even more vulnerable to that than plain old fat, because they rely on clever interactions of multiple (conflicting) sources of truth in the event of a crash, and when the assumptions the FS makes turn out to be invalid, it (by design) will draw incorrect inferences about which data is "real" and which due to the crash. You can easily lose whole directories (say, user directories) at once like this.
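As a toy illustration of that reliance on ordering (Python, simplified far beyond any real filesystem): a journal only protects you if the commit record cannot become durable before the entries it covers, which is precisely what the flush requests are supposed to guarantee:
    import json, os

    def journal_commit(journal_path, entries):
        with open(journal_path, "ab") as j:
            # Step 1: write the journal entries describing the update.
            for e in entries:
                j.write((json.dumps(e) + "\n").encode())
            j.flush()
            os.fsync(j.fileno())  # barrier 1: entries must be durable first
            # Step 2: only now write the commit record. If the drive ignores
            # flushes, this line may hit flash before the entries above, and
            # crash recovery will replay a commit that points at garbage.
            j.write(b'{"commit": true}\n')
            j.flush()
            os.fsync(j.fileno())  # barrier 2: commit record durable

    journal_commit("journal.log", [{"op": "rename", "from": "a.tmp", "to": "a"}])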
HollyDOL - Wednesday, October 19, 2016 - link
Tbh I consider this whole argument pretty much obsolete... if you have close to $1300 to spare for a 2TB SSD monster, you definitely have $250-350ish for a decent UPS.
Or, if you run a several-thousand-dollar machine without one, you more than deserve whatever you get.
It's the same argument as not building a double Titan XP monster and powering it with a no-name Chinese PSU. There are things which are simply a no-go.
bcronce - Tuesday, October 18, 2016 - link
As an ex-IT person who used to manage thousands of computers, I have never seen catastrophic data loss caused by a power outage, and I have seen many of them. What I have seen are hard drives or PSUs dying and recently committed data being lost, but never fully committed data.
That being said, SSDs are a special beast, because many times writing new data requires moving existing data, and this is dangerous.
Most modern filesystems since the 90s, except FAT32, were designed to handle unexpected power loss. NTFS was the first FS from MS that pretty much got rid of power-loss issues.
KAlmquist - Tuesday, October 18, 2016 - link
The functionality that a file system like NTFS requires to avoid corruption in the case of a power failure is a write barrier. A write barrier is a directive that says that the storage device should perform all writes issued prior to the write barrier before performing any of the writes issued after it.
On a device using flash memory, write barriers should have minimal performance impact. It is not possible to overwrite flash memory in place, so when an SSD gets a write request, it will allocate a new page (or multiple pages) of flash memory to hold the data being written. After it writes the data, it will update the mapping table to point to the newly written page(s). If an SSD gets a whole bunch of writes, it can perform the data write operations in parallel as long as the pages being written all reside on different flash chips.
If an SSD gets a bunch of writes separated by write barriers, it can write the data to flash just like it would without the write barriers. The only change is that when a write completes, the SSD cannot update the mapping table to point to the new data until the earlier writes have completed.
This is different from a mechanical hard drive. If you issue a bunch of writes to a mechanical hard drive, the drive will attempt to perform the writes in an order that will minimize seek time and rotational latency. If you place write barriers between the write requests, then the drive will execute the writes in the same order you issued them, resulting in lower throughput.
Now suppose you are unable to use write barriers for some reason. You can achieve the same effect by issuing commands to flush the disk cache after every write, but that will prevent the device from executing multiple write commands in parallel. A mechanical hard drive can only execute one write at a time, so cache flushes are a viable alternative to write barriers if you know you are using a mechanical hard drive. But on SSDs, parallel writes are not only possible, they are essential to performance. The write speed of an individual flash chip is slower than a hard drive's write speed; the reason that sequential writes on most SSDs are faster than on a hard drive is that the SSD writes to multiple chips in parallel. So if you are talking to an SSD, you do not want to use cache flushes to get the effect of write barriers.
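To see the cost of abusing cache flushes as barriers, compare fsync-after-every-write with writing a batch and flushing once at the end (a rough Python sketch; the file name and sizes are arbitrary):
    import os, time

    def write_chunks(path, chunks, fsync_every_write):
        with open(path, "wb") as f:
            for chunk in chunks:
                f.write(chunk)
                if fsync_every_write:
                    os.fsync(f.fileno())  # flush used as a poor man's barrier
            f.flush()
            os.fsync(f.fileno())          # one final flush for durability

    chunks = [os.urandom(64 * 1024) for _ in range(256)]  # 16 MB total
    for mode in (True, False):
        start = time.perf_counter()
        write_chunks("flush_test.bin", chunks, fsync_every_write=mode)
        print(f"fsync per write: {mode} -> {time.perf_counter() - start:.3f} s")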
I take it from what shodanshok wrote that Microsoft Windows doesn't use write barriers on NVMe devices, giving you the choice of either using cache flushes or risking file system corruption on loss of power. A quick look at the NVMe specification suggests that this is the fault of Intel, not Microsoft. Unless I've missed it, Intel inexplicably omitted write barrier functionality from the specification, forcing Microsoft to use cache flushing as a work-around:
http://www.nvmexpress.org/wp-content/uploads/NVM_E...
On SSD devices, write barriers are essentially free. There is no need for a separate write barrier command; the write command could contain a field indicating that the write operation should be preceded by a write barrier. Users shouldn't have to choose between data protection and performance when correct use of a sensibly designed protocol would give them both without having to worry about it.
Dorkaman - Monday, November 28, 2016 - link
So this drive has capacitors to help write out anything in the buffer if the power goes out:
https://youtu.be/nwCzcFvmbX0 (skip to 2:00)
23 power-loss capacitors used to keep the SSD's controller running just long enough, in the event of an outage, to flush all pending writes:
http://www.tomshardware.com/reviews/samsung-845dc-...
Will the 960 Evo have that? Would this prevent something like this (RAID 0 lost due to power outage):
https://youtu.be/-Qddrz1o9AQ
Nitas - Tuesday, October 18, 2016 - link
This may be silly of me but why did they use W8.1 instead of 10?
Billy Tallis - Tuesday, October 18, 2016 - link
I'm still on Windows 8.1 because this is still our 2015 SSD testbed and benchmark suite. I am planning to switch to Windows 10 soon, but that will mean that new benchmark results are not directly comparable to our current catalog of results, so I'll have to re-test all the drives I still have on hand, and I'll probably take the opportunity to make a few other adjustments to the test protocol.
Switching to Windows 10 hasn't been a priority because of the hassle it entails and the fact that it's something of a moving target, but particularly with the direction the NVMe market is headed the Windows version is starting to become an important factor.
Nitas - Tuesday, October 18, 2016 - link
I see, thanks for clearing that up!
Samus - Wednesday, October 19, 2016 - link
Windows 8.1 will have virtually no difference in performance compared to Windows 10 for the purpose of benchmarking SSDs...
leexgx - Tuesday, October 25, 2016 - link
The problem with Windows 10 when used as a benchmark system is that you have to make sure automatic maintenance is disabled and Windows Update is disabled, or it messes the results up (I have 2 laptops and both of them go nuts when the screen turns off on Win10: the fan revs up and there's lots of SSD activity).
I would personally stick with Windows 7 or 8 as they are more predictable.
If using Windows 8 or 10 you need to disable the idle maintenance auto task (and set Windows Update to never check), and on Windows 10 you also have to disable the Windows Update service as it can mess up benchmark results (or, if using Windows 10 Pro, use gpedit to set Windows Update to ask before downloading; note that pressing Check or Download actually means download and install on Windows 10 Pro).
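For reference, something like this from an elevated prompt does both (a rough Python sketch; the maintenance task paths can differ between builds, so treat them as assumptions and check Task Scheduler under \Microsoft\Windows\TaskScheduler first):
    import subprocess

    def run(cmd):
        print(">", " ".join(cmd))
        subprocess.run(cmd, check=False)  # needs an admin command prompt

    # Stop and disable the Windows Update service
    run(["sc", "stop", "wuauserv"])
    run(["sc", "config", "wuauserv", "start=", "disabled"])

    # Disable the automatic maintenance tasks (task paths are assumptions)
    for task in (r"\Microsoft\Windows\TaskScheduler\Regular Maintenance",
                 r"\Microsoft\Windows\TaskScheduler\Idle Maintenance"):
        run(["schtasks", "/Change", "/TN", task, "/Disable"])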
Badelhas - Tuesday, October 18, 2016 - link
If I replace my Vertex 3 120GB SATA 3 SSD with this one and use my PC for normal tasks like web browsing and gaming, will I notice any difference? That's the real question to me.
Cheers
DanNeely - Tuesday, October 18, 2016 - link
The biggest one will be being able to have all your games on the SSD instead of just 1 or 2. Even a cheap SSD is fast enough that IO is rarely a major bottleneck in day-to-day consumer use.
phobos512 - Tuesday, October 18, 2016 - link
For the money you will spend, you will not notice a significant difference. If the rest of your system is of the same vintage as the SSD you're replacing, that will be even more true.
phobos512 - Tuesday, October 18, 2016 - link
And here's the evidence.
https://cdn.arstechnica.net/wp-content/uploads/sit...
https://cdn.arstechnica.net/wp-content/uploads/sit...
https://cdn.arstechnica.net/wp-content/uploads/sit...
Amoro - Tuesday, October 18, 2016 - link
There's a typo in the form factor for 960 drives, "Sngle-sided". Also, if the form factor is the same for both drives shouldn't the cell be merged?
Does this make the 950 Pro obsolete at this point too? At least for the 512GB version.
Billy Tallis - Tuesday, October 18, 2016 - link
Thanks. I fixed the typo, but left the two cells separate and split the PCIe interface so that there's an uninterrupted vertical line separating the old drives from the new.
Once the 512GB 960 Pro is widely available and once Samsung delivers the drivers for it, there should be no reason to get the 512GB 950 Pro. I do hope to confirm that directly by testing a 512GB 960 Pro against the 950 Pro, but sample supplies have been pretty limited for this launch. The 256GB 950 Pro won't have a direct successor, but if the 960 EVO does what it's supposed to it should offer better real-world performance at a much lower price.
TheinsanegamerN - Tuesday, October 18, 2016 - link
I'd say price would be a big one. If you can get the 950 Pro for $100 less than the 960 Pro of the same size, then unless you need all that speed the 950 Pro would be the better deal.
Swede(n) - Tuesday, October 18, 2016 - link
How was the 960 Pro connected during the test?
Was it on the Asus Z97 mobo's M.2 connector that shares bandwidth with SATA Express #1? If so, is it recommended to unplug any other SATA drive from SATA port #1 and use a separate SATA port for that device, so as not to lose performance under heavy workloads where multiple SSDs are in use?
Or was the 960 Pro connected to a PCIe 3.0 slot via an adapter?
Please explain this and the possible benefits of one option over the other, considering a hefty gaming GPU connected to the PCIe 3.0 x16 slot on a similar mobo (Asus Z97 Deluxe).
Sincerely from Sweden
Billy Tallis - Tuesday, October 18, 2016 - link
The SSD testbed doesn't have a discrete GPU, so all PCIe SSDs are tested in the PCIe 3.0 x16 slot. There's a riser card with the power measurement circuitry between the SSD and the motherboard. M.2 PCIe SSDs are tested in a simple passive PCIe x4 to M.2 adapter card which is plugged in to the power measurement riser card. I'll also be testing the 960 Pro with the Angelbird Wings PX1 adapter and heatsink as I dig deeper into its thermal performance.
TheinsanegamerN - Tuesday, October 18, 2016 - link
Can't wait to see that, as it seems the 960 Pro is thermally limited more often than not, especially on write tests. Hope to see even bigger improvements.
eldakka - Wednesday, October 19, 2016 - link
But but but, since the controller is Polaris, doesn't the SSD handle your graphics too?
I'll see myself out now.
BurntMyBacon - Wednesday, October 19, 2016 - link
@eldakka No. That would be Fiji. Though, I can see how it would be confusing. Even Ryan thought it was a Polaris 10 chip initially.
http://www.anandtech.com/show/10518/amd-announces-...
Waiting for a Polaris update to the Radeon Pro SSG. Throw some 960s (Polaris controllers) in to replace the 950s and things will get really confusing. ;')
VeauX - Tuesday, October 18, 2016 - link
Would migrating from an old SandForce-based SSD to this provide the same wow effect as going from mechanical to SSDs back in the day?
GTRagnarok - Tuesday, October 18, 2016 - link
No, unless what you're doing involves reading or writing many gigabytes of data at a time, in which case it'll be noticeably faster. Otherwise, the experience will be very similar to old SATA SSDs.
AnnonymousCoward - Wednesday, October 19, 2016 - link
I have an absolutely brilliant idea. AT could just test that, and you wouldn't have to wonder and ask in the comments section!
Mr Perfect - Tuesday, October 18, 2016 - link
This is kind of a chicken-and-egg problem, but has Samsung said anything about releasing these as U.2? Quite a few new motherboards have U.2 ports now, and putting these drives in the larger 2.5-inch form factor would make it possible to solve the overheating issues with heatsinks.
Gigaplex - Tuesday, October 18, 2016 - link
It wouldn't be hard for a 3rd party to create a 2.5" adaptor that incorporates a heatsink.
Mr Perfect - Wednesday, October 19, 2016 - link
You wouldn't think so, but I had a hell of a time finding one. All said and done, only one manufacturer seems to make an adapter to turn an M.2 into a U.2, some company called microsatacables.com:
http://www.microsatacables.com/u2-sff8639-to-m2-pc...
Some more native U.2 drives would be nice.
sircod - Tuesday, October 18, 2016 - link
Are you guys doing a review of the 600p? Not quite the same class as the 960 Pro, but I definitely want to see the 960 Evo compared to the 600p.
Billy Tallis - Tuesday, October 18, 2016 - link
The Intel 600p is very high on my to-do list. Since it's also using the Microsoft NVMe driver I want to run it through some more tests before publishing the review, but it should be done before I get the 960 EVO.
WatcherCK - Tuesday, October 18, 2016 - link
Curious about the operating temperatures of the drive; I'm guessing The Destroyer gave it a good thermal workout :) Puget Systems have done some testing on the effect of adding cooling to M.2 SSDs:
https://www.pugetsystems.com/labs/articles/Samsung...
Aquacomputer also make a PCIe riser card with an integrated waterblock for M.2 drives, the kryoM.2, for those who want to water-cool their SSD.
Does anyone know if moving an M.2 drive to a riser card improves cooling for the drive? I would think there is a little bit of improvement from bringing the drive off the surface of the motherboard... How, if at all, is everyone keeping their motherboard-mounted SSDs cool?
BurntMyBacon - Wednesday, October 19, 2016 - link
@WatcherCK Yes, moving an M.2 drive to a riser card improves cooling (at least in my system). I checked temperatures mounted on my motherboard with a spot cooler directly on the 950 Pro. I then compared that against mounting on a Silverstone ECM20 and an Asus Hyper M.2 X4 Mini (no spot cooler). If you are able to direct good airflow to the riser card, as I was, the Asus riser card runs significantly cooler even though it has no spot cooler. If you use a thermal pad on the back of the M.2 drive on the Silverstone ECM20, it may in fact do even better than the Asus, but in my system the larger standoffs allowed more air to flow behind the M.2 drive on the Asus solution. In any case, I can't seem to induce thermal throttling on my 950 Pro in my setup regardless of how hard I try. Again, it will depend on how much airflow you can deliver to it.
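One simple way to check for throttling like this is to log the drive's reported temperature while a benchmark or large copy runs. The short Python sketch below is one rough approach using smartmontools; the device path (/dev/nvme0) and the "Temperature:" line it parses are assumptions that may differ on your platform, so treat it as a starting point rather than a finished tool.

# Poll the drive temperature once per second while a workload runs, to spot
# thermal throttling. Assumes smartmontools (smartctl) is installed and the
# device path is correct for your system; smartctl output formats vary.
import re
import subprocess
import time

DEVICE = "/dev/nvme0"  # adjust for your platform

def read_temp_celsius(device):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    match = re.search(r"Temperature:\s+(\d+)\s+Celsius", out)
    return int(match.group(1)) if match else None

if __name__ == "__main__":
    # Sample for five minutes; run your benchmark or file copy in parallel.
    for _ in range(300):
        temp = read_temp_celsius(DEVICE)
        print(time.strftime("%H:%M:%S"), f"{temp} C" if temp is not None else "n/a")
        time.sleep(1)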
Gradius2 - Tuesday, October 18, 2016 - link
Those SSDs suffer from HEAT. If you don't put forced cooling on them, it will DEGRADE the speed no matter what! This is why you won't see much more than 3GB/s!
iwod - Wednesday, October 19, 2016 - link
Where is this performance heading? Where do we need to go? The Destroyer and Heavy tests definitely don't represent 95% of consumer usage. Do we need higher random QD1 read/write? Or do we need higher sequential read/write? If so, then we don't need firmware and powerful CPU cores tuned for QD32. Will that save cost? And why are we still at around 5W of power usage? When can we get that dropped down to 2W or less?
zodiacfml - Wednesday, October 19, 2016 - link
Blazing fast. I will not be able to make use of this speed except for W10 upgrades.
profdre - Monday, October 24, 2016 - link
@Billy Tallis: Would it be possible to test on another mainboard? There seems to be a clear bandwidth issue for sequential transfers, as other tech sites such as https://www.computerbase.de/2016-10/samsung-ssd-96... achieved almost 3400 MB/s in CrystalDiskMark for the 512 GB model. At first they struggled to reach more than 3100 MB/s. (Samsung's figures were obtained with CrystalDiskMark, according to ComputerBase.)
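As a very rough cross-check of sequential read numbers like these (not a replacement for CrystalDiskMark), a few lines of Python can time large sequential reads from a pre-created test file on the drive under test. The file name and chunk size below are arbitrary, and unless the file is much larger than system RAM the OS page cache will inflate the result, so treat the output as a sanity check only.

# Crude sequential-read throughput check. Create a large file on the drive
# under test first (ideally much larger than RAM, or read it on a fresh boot),
# otherwise cached reads will inflate the number.
import time

TEST_FILE = "testfile.bin"       # pre-created large file on the drive under test
CHUNK = 8 * 1024 * 1024          # 8 MiB reads, roughly "sequential" territory

total = 0
start = time.perf_counter()
with open(TEST_FILE, "rb", buffering=0) as f:
    while True:
        data = f.read(CHUNK)
        if not data:
            break
        total += len(data)
elapsed = time.perf_counter() - start
print(f"Read {total / 1e6:.0f} MB in {elapsed:.2f} s "
      f"-> {total / 1e6 / elapsed:.0f} MB/s")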
calbear88 - Tuesday, November 15, 2016 - link
Great table in the article summarizing the different Samsung NAND technologies. Here's a summary of the different types of NAND and which products they were used in:
27nm MLC: 830, PM830
21nm MLC: 840 Pro
21nm TLC: 840
19nm MLC: XP941
19nm TLC: 840 EVO
16nm MLC: SM951
16nm TLC: 750 EVO, PM951
32-layer 86Gbit V-NAND MLC: 850 Pro
32-layer 128Gbit V-NAND TLC: 850 EVO
32-layer 128Gbit V-NAND MLC: 950 Pro
48-layer 128Gbit V-NAND MLC: 960 Pro, SM961
48-layer 128Gbit V-NAND TLC: 960 EVO, PM961
Meteor2 - Wednesday, November 23, 2016 - link
How are new dies and an entirely new controller 'just a generational refresh of the 950 Pro', from any angle?
anaconda1 - Monday, February 13, 2017 - link
The Samsung 2.1 driver has DPC latency issues on my end. I am using a Samsung 960 EVO 500GB NVMe, and with the native Microsoft driver (Windows 10 with the latest updates installed) or the Samsung 2.0 driver all is OK: DPC latency reports are fine, and LatencyMon and DPC Latency Checker both report trouble-free operation. However, after installing the 2.1 driver, both LatencyMon and DPC Latency Checker report problems with storport.sys and with DPC. Rolling back from the Samsung 2.1 driver to the Microsoft or Samsung 2.0 driver, all goes back to normal.
hansmuff - Wednesday, March 15, 2017 - link
I have to try the 2.0 driver; thanks for your comment! I only installed the 2.1 driver for my 960 Pro and it was faster, but DPC went all to shit.
WarVance - Friday, October 27, 2017 - link
"...a more thorough comparison of how NVMe drivers and operating system versions affect performance will be coming in the future."Has this been published? I'm very interested in just such an analysis. I recently obtained an Intel 750 400GB card and want to know all about the ideal driver setup under Windows 10 and possibly 7.