Why We Need 3D NAND

For years it has been known that traditional NAND (i.e. 2D NAND) is running on its last legs. Many analysts predicted that we would not see NAND scale below 20nm because reliability would simply be too low to make such a small lithography feasible. However, thanks to some clever engineering on both the hardware and firmware sides, NAND has scaled to 15nm without any significant issues, but now the limit really has been reached. To understand the limits of 2D NAND scaling, let's say hello to our good old friend, Mr. N-channel MOSFET.

Unfortunately the diagram above is a bit too simplified to truly show what we need, so let's look at a real cross-section photo instead:

Let me walk you through the structure first. At the top is the control gate, which is part of a structure known as a wordline. In a standard NAND design, the control gate wraps around the floating gate, and the two gates are separated by an insulating oxide-nitride-oxide (ONO) layer, which is sometimes called the inter-poly dielectric (IPD). Under the floating gate is the tunnel oxide, which is also an insulator, followed by the silicon substrate, which acts as the bitline.
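
For reference, here is the same stack written out from top to bottom as a short illustrative sketch (the layer names are just descriptive labels of my own, not terms from any real toolchain):

```python
# Illustrative sketch of the planar floating-gate cell described above.
CELL_STACK = [
    ("control gate",      "part of the wordline; wraps around the floating gate"),
    ("ONO / IPD",         "oxide-nitride-oxide insulator between the two gates"),
    ("floating gate",     "stores the electrons that encode the cell's value"),
    ("tunnel oxide",      "insulator electrons tunnel through on program/erase"),
    ("silicon substrate", "acts as the bitline in this description"),
]

for name, role in CELL_STACK:
    print(f"{name:18s} {role}")
```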

The reason the control gate is wrapped around the floating gate is to maximize the capacitance between the two. As you will soon learn, the capacitance between the gates is the key factor in NAND because it is what allows the control gate to control the floating gate.
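
If you want to put a number on this, the usual way is a gate coupling ratio: the share of the floating gate's total capacitance that couples to the control gate. Here is a minimal sketch, with made-up capacitance values purely for illustration:

```python
def gate_coupling_ratio(c_ipd, c_tunnel, c_parasitic=0.0):
    """Fraction of the floating gate's total capacitance that couples to the
    control gate. The closer this is to 1, the better the control gate can
    set the floating gate's potential during program and erase."""
    return c_ipd / (c_ipd + c_tunnel + c_parasitic)

# Made-up capacitances in arbitrary units, just to show the trend:
# wrapping the control gate around the floating gate raises C_ipd,
# which pushes the coupling ratio up.
print(gate_coupling_ratio(c_ipd=0.6, c_tunnel=0.4))   # ~0.60, planar gate
print(gate_coupling_ratio(c_ipd=0.9, c_tunnel=0.4))   # ~0.69, wrapped gate
```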

The purpose of bitlines and wordlines can be rather difficult to understand when looking at a cross-section, so here is what it all looks like from the top. Basically, bitlines and wordlines are just lines going in perpendicular directions and the floating gate and other materials reside between them.

When programming a cell, a high voltage of around 20V is applied to the wordline of that cell. The problem, of course, is that you cannot apply the voltage to just one cell because the whole wordline gets activated, so to select a specific cell, the bitline of that cell is held at 0V. At the same time, the neighboring bitlines are charged to about 6V, because this increases the capacitance between the bitline and the floating gate, which in turn negates enough of the capacitance between the control and floating gates that the electrons cannot tunnel through the tunnel oxide. This is crucial because if all the bitlines were held at 0V, then all the cells along that wordline would be programmed with the same value.

To erase a cell, the reverse operation is performed: the wordline is kept at 0V while roughly 20V is applied to the bitline, which makes the electrons flow in the opposite direction (i.e. from the floating gate back to the bitline/silicon).
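
Here is a toy model of the biasing scheme described in the last two paragraphs. The voltages are the approximate figures from the text above, not values from any datasheet:

```python
def cell_bias(operation, selected_wordline, selected_bitline, wl, bl):
    """Return (wordline_V, bitline_V) for the cell at (wl, bl).
    Roughly: ~20V on the selected wordline programs, 0V on the selected
    bitline selects the cell, ~6V on neighboring bitlines inhibits
    programming, and erase reverses the polarity."""
    if operation == "program":
        wl_v = 20.0 if wl == selected_wordline else 0.0
        bl_v = 0.0 if bl == selected_bitline else 6.0   # inhibit neighbors
    elif operation == "erase":
        wl_v = 0.0                                       # wordline held low
        bl_v = 20.0                                      # bitline/substrate high
    else:
        raise ValueError(f"unknown operation: {operation}")
    return wl_v, bl_v

# Programming the cell at wordline 3, bitline 5:
print(cell_bias("program", 3, 5, wl=3, bl=5))   # (20.0, 0.0)  -> programmed
print(cell_bias("program", 3, 5, wl=3, bl=6))   # (20.0, 6.0)  -> inhibited
print(cell_bias("erase",   3, 5, wl=3, bl=5))   # (0.0, 20.0)  -> erased
```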

The way NAND is programmed and erased is also its Achilles' heel. Because such high voltages are needed, the insulators around the floating gate (i.e. the ONO and the tunnel oxide) wear out as the NAND goes through program and erase cycles. The wear causes the insulators to lose their insulating characteristics, meaning that electrons may be able to escape the floating gate or get trapped in the tunnel oxide during a program or erase. This causes a change in the voltage state of the cell.

Remember, NAND uses voltage states to define the bit value. If the charge in the floating gate is not what it is supposed to be, the cell may return an invalid value when read. With MLC and TLC this is even worse because the voltage states are much closer to each other, meaning that even minor charge changes can shift a state away from its original position and alter the cell's value. Basically, MLC and TLC have less room for voltage state changes, which is why their endurance is lower: a cell that cannot hold its charge reliably is useless.
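
A quick back-of-the-envelope sketch shows why the margins shrink: the more bits per cell, the more voltage states have to fit into the same overall window. The 4V window below is an arbitrary example, not a real spec:

```python
def state_margin(bits_per_cell, window_volts=4.0):
    """Split an arbitrary voltage window into 2^bits states and return the
    width of each state. Narrower states tolerate less charge drift."""
    states = 2 ** bits_per_cell
    return states, window_volts / states

for name, bits in (("SLC", 1), ("MLC", 2), ("TLC", 3)):
    states, width = state_margin(bits)
    print(f"{name}: {states} states, ~{width:.2f} V per state")
# SLC: 2 states, ~2.00 V per state
# MLC: 4 states, ~1.00 V per state
# TLC: 8 states, ~0.50 V per state
```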

Now that we have briefly covered the operation of NAND, let's see what this has to do with scaling. Here is the same cross-section as above, but with some dimensions attached.

That is what a cross-section of a single cell looks like. When NAND is scaled, all of these dimensions get smaller, which means that the individual cells get smaller, as does the distance between them. The cross-section above is of IMFT's 25nm NAND (hence the bitline length of 25nm), so it is not exactly current generation, but unfortunately I do not have any newer photos. There is no general rule for how much each dimension shrinks, because "16nm" simply means that one of the lengths is 16nm while the others may not shrink that much.

Scaling introduces a variety of issues, but I will start with cell size. As the cell is shrunk, the floating gate shrinks with it, which means that the floating gate is able to hold fewer and fewer electrons with every new process node. To put this into perspective, Toshiba's and SanDisk's 15nm NAND stores fewer than 20 electrons per cell. With TLC, that is less than three electrons per voltage state, so there is certainly not much headroom for escaped electrons. In other words, the cell becomes more vulnerable to IPD and tunnel oxide wear because the loss of even a single electron can be critical to the voltage state.
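
The arithmetic behind that figure is simple enough to show directly, using the roughly 20-electron number quoted above:

```python
def electrons_per_state(total_electrons, bits_per_cell):
    """Roughly how many electrons separate adjacent voltage states."""
    states = 2 ** bits_per_cell
    return total_electrons / states

# ~20 electrons in a 15nm floating gate, split across 8 TLC states:
print(electrons_per_state(20, 3))   # 2.5 -> losing a single electron matters
```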

The second issue is the proximity of the cells. The key factor in NAND is the capacitance between the control and floating gates, but as the cells move closer together through scaling, the neighboring cells also introduce capacitive coupling. In simpler terms, the neighboring cells interfere more as the distance between the cells shrinks. The obstacle is that the interference varies depending on the charge of the neighboring cell, so there is no easy way to cancel it out. This in turn makes programming harder and more time consuming, because a higher voltage is needed to achieve sufficient capacitance between the control and floating gates to make the electrons tunnel through the oxide.

The graph above outlines the historic rate at which cell-to-cell interference has increased with die shrinks. At 90nm, the interference was only around 8-9%, but at 20nm it is a rather significant 40%. That figure means 40% of the capacitive coupling comes from the other cells, making it very hard to control the gate you are trying to program or read. Fortunately, as a result of some clever engineering (i.e. an airgap between the wordlines), the interference is only about 25% at 25nm, which is much more manageable than the 40% the historic rate would have given us.
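
In terms of the earlier coupling-ratio sketch, the interference percentage is simply the share of the floating gate's total coupling that comes from its neighbors rather than from its own control gate. The capacitance values below are made up purely to reproduce the quoted percentages:

```python
def neighbor_interference(c_neighbors, c_control):
    """Fraction of total capacitive coupling contributed by neighboring cells."""
    return c_neighbors / (c_neighbors + c_control)

# Made-up capacitance values chosen only to illustrate the quoted ratios:
print(neighbor_interference(c_neighbors=0.09, c_control=1.0))  # ~8%  (90nm class)
print(neighbor_interference(c_neighbors=0.67, c_control=1.0))  # ~40% (20nm, historic trend)
print(neighbor_interference(c_neighbors=0.33, c_control=1.0))  # ~25% (25nm with airgap)
```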

The above can be fairly tough to digest, so let's try a simple analogy that everyone should be able to understand. Imagine that you have a bunch of speakers, each playing a different song. When the speakers are relatively large and far away from each other, it is easy to properly hear the song that the speaker closest to you is playing. Now, what happens if you bring the other speakers closer to the one you are listening to? The other speakers will interfere, and it becomes harder to tell your song apart from the others. If you turn down the volume or switch to smaller speakers with lower output, it becomes even harder to distinguish your song from the songs the other speakers are playing. Repeat this enough times and there comes a point where you hear your song no more clearly than any of the others.

The effect is essentially the same with NAND scaling. When the cells (or speakers, in the analogy) move closer to each other, the amount of interference increases, making it harder to sense the cell or hear the speaker. At the same time, the size of the cell (or speaker) shrinks, which makes it even harder to focus on just one.

That is NAND scaling and its issues in a nutshell. We have seen innovations such as airgaps between the wordlines to reduce cell-to-cell interference, and a high-k metal gate in place of the traditional ONO IPD to increase the control gate to floating gate capacitance, but the limit has now been reached. However, like other semiconductors, NAND must follow Moore's Law in order to become more cost efficient. If you can no longer scale in the X and Y dimensions, what do you do? You hit the reset button and introduce the Z dimension.

Comments

  • beginner99 - Tuesday, July 1, 2014 - link

    Sounds great until you see the price. Even for an enthusiast, a Crucial MX100 is probably the more reasonable choice. It's half the price...
  • juhatus - Tuesday, July 1, 2014 - link

    Yeah, the most important factor for a SATA SSD is the price, not how it saturates the 550MB/s limit in a scenario that will never happen for most consumers. Not even a mention on the first page, eh? Please put a little pressure on the manufacturers to move on to M.2 and NVMe.

    **In the Midnight hour, She cried more, more, more!!**
  • boogerlad - Tuesday, July 1, 2014 - link

    Are you guys going to review the SM1715? Really curious to see how it stacks up against the Intel, especially in client workloads.
  • pesho00 - Tuesday, July 1, 2014 - link

    I really hoped to see 2, 3, 4 TB drives with this technology :(
    But we will wait ;)
    Nice drive, not so nice price ;)
  • MrSpadge - Tuesday, July 1, 2014 - link

    Not much point in offering them now if you consider the price of the 1 TB version.
  • Samus - Tuesday, July 1, 2014 - link

    Those Koreans...they're just killer engineers.
  • trumanhw - Tuesday, July 1, 2014 - link

    I don't get it. I get about 505 read and 495 write on Evo. This goes up by TEN percent in performance and then all but DOUBLES the price?

    I'm confident someone here has understanding of this I'm missing--please reply and just point me to the parts that change my view.

    Thanks
  • hojnikb - Tuesday, July 1, 2014 - link

    Your EVO has worse endurance, lower IOPS, lower consistency, and most of all lower write speeds (the 495MB/s write is due to a trick called TurboWrite).
    Sequential speeds are only a part of the story.
  • emn13 - Tuesday, July 1, 2014 - link

    I always wonder who this kind of drive is supposed to be aimed at. Yes, it's fast, but the competition is fast enough that I really can't imagine anyone *noticing* the difference outside of artificial, extremely heavy non-stop I/O. And I don't mean light-load non-stop, I mean I/O maxed out non-stop so the drives can't take time to garbage collect.

    So... a power user is unlikely to notice much difference outside of short bursts of high-I/O apps, and as the rather heavy 2011 light workload demonstrates, the drives are already maxing out there too. I just can't think of a real-world load where a human being would notice the performance difference and care about it; that would imply a performance difference of around a factor of 2.

    So we're left with a super fast drive (good for bragging rights, but what else?), but with some apparently intentionally missing features like power-loss protection. Why would even a heavy user choose this over, say, Samsung's own 840 EVO, or Crucial's MX100/M500?

    I just don't see the value here. To me this looks like micro-optimization and losing sight of the bigger picture.

    Price matters. Features matter. Performance matters only until you're fast enough.
  • hojnikb - Tuesday, July 1, 2014 - link

    Kinda agree with this. If you REALLY need every bit of performance, you won't be looking at SATA drives at all.
    This probably has a niche.
    But mainstream drives are really fast enough these days for most people.
