Last year, Intel and Samsung proposed new form factors for enterprise/datacenter SSDs with the goal of overcoming the shortcomings of the existing M.2, U.2 and PCIe add-in card form factors. Samsung's NF1 form factor (previously known as NGSFF) had little presence at Flash Memory Summit this year without Samsung's participation in the show, but the derivatives of Intel's Ruler design were all over the place. Unfortunately, the Ruler has spawned not just one industry standard, but a whole family of new form factors.

Working with SFF, the committee originally formed to standardize the 2.5" hard drive form factor, Intel has developed the Ruler into the Enterprise and Datacenter SSD Form Factor (EDSFF) family of standards.

Why new form factors?

The existing form factor options for enterprise SSDs have proven inadequate for datacenter needs. It is increasingly common for servers to use several types of SSD (boot drive, performance tier, capacity tier), and that usually requires using more than one SSD form factor. Each has its own downsides:

2.5" SATA, SAS, U.2: Drives with the same 7mm thickness that consumer SATA drives use are relatively limited in maximum PCB area for NAND flash packages, and internal volume for power loss protection capacitors. Increasing the thickness up to 15mm allows for bulky capacitors and two PCBs stacked inside the drive's case, but this severely compromises the ability to cool the drive. Backplanes for 2.5" drives tend to be a severe airflow obstruction.

PCIe add-in cards: Half-height half-length (HHHL or MD2) cards have plenty of PCB surface area for large amounts of flash and heatsinks that can handle 40W or more. Full-height cards increase these limits even more. This is the only current option for PCIe x8 or wider interfaces. Hot-swapping is possible with many cards, but this does little good when the cards are not accessible from the front of the server.

M.2: Enterprise SSDs typically use the M.2 22110 card size, which is longer than the 2280 size used by client/consumer drives. The extra space allows for higher drive capacities or power loss protection capacitors. However, the power and thermal limits are still severely constraining. Delivering more than 8W with only a 3.3V supply requires careful system design to ensure that enough current can be provided without the voltage dropping out of the required 5% tolerance (a rough calculation follows this list). Dissipating 8-12W usually requires heatsinks that detract from the density advantage of such a small form factor. Hot-swapping is only possible by installing M.2 drives in some form of carrier module that further inflates the space occupied by each drive.
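
To put that 3.3V constraint in numbers, here is a rough back-of-the-envelope sketch. The 8W and 12W figures and the 5% tolerance come from the description above; the worst-case assumption that the drive pulls full power at the lowest allowed voltage is added purely for illustration.

```python
# Rough current requirement for M.2 drives fed only from a 3.3V rail,
# using the 8W and 12W figures mentioned above.
SUPPLY_V = 3.3          # nominal M.2 supply voltage
TOLERANCE = 0.05        # 5% allowed deviation
V_MIN = SUPPLY_V * (1 - TOLERANCE)   # worst-case voltage at the connector

for power_w in (8, 12):
    # Pessimistic case: the drive draws full power at the lowest allowed voltage.
    current_a = power_w / V_MIN
    print(f"{power_w} W at {V_MIN:.3f} V -> {current_a:.2f} A")

# Prints roughly 2.55 A for 8 W and 3.83 A for 12 W. Pushing several amps
# through thin motherboard traces while staying inside a 5% window is what
# makes >8 W M.2 designs awkward.
```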


The SFF-TA-1002 Connector

The EDSFF family of SSD form factors shares a common connector standard, which has also been adopted by the Gen-Z interconnect and the Open Compute Project's latest NIC form factor. The SFF-TA-1002 connector standard defines a multi-lane card-edge connector with a much smaller contact pitch than PCIe add-in card slots. Three sizes are defined, allowing for PCIe x4, x8 and x16 links, and narrower sockets can accept longer cards thanks to the notches in the card edge connector. The connector provides adequate signal integrity for data rates of at least 56 GT/s with an NRZ encoding (as used by PCIe), so it is more future-proof than most implementations of the existing connector standards for PCI Express signals. The smallest 1C variant, allowing for PCIe x4 and power, is 23.88mm wide, about the same size as an M.2 connector. The widest 4C variant supporting PCIe x16 links is 57.02mm wide, compared to about 89mm for a standard PCIe x16 slot.
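
For a sense of how much headroom that 56 GT/s rating leaves, the sketch below compares raw per-lane rates, assuming the 128b/130b line coding used by PCIe 3.0 and later and ignoring packet and flow-control overhead.

```python
# Rough per-lane throughput comparison between the 56 GT/s NRZ rating
# quoted above and the PCIe generations the connector might carry.
# Assumes 128b/130b line coding; protocol overhead is ignored.
ENCODING = 128 / 130

rates_gt_s = {
    "PCIe 3.0": 8,
    "PCIe 4.0": 16,
    "PCIe 5.0": 32,
    "SFF-TA-1002 rating": 56,
}

for name, gt_s in rates_gt_s.items():
    gb_s = gt_s * ENCODING / 8   # GB/s per lane after line coding
    print(f"{name:>20}: {gt_s:2d} GT/s ~ {gb_s:.2f} GB/s per lane")

# At the connector's rated 56 GT/s, a x16 link would have roughly
# 16 * 6.9 ~ 110 GB/s of raw bandwidth headroom.
```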

SFF-TA-1002 Connector and Card compatibility

More important than the compact size of the connector is how a backplane full of them is constructed. The EDSFF form factors are intended to be used with right-angle connectors, so the backplane PCB ends up parallel to the bottom of the server with air flowing across it, instead of being oriented to block airflow and requiring holes to be cut in the PCB for effective cooling. The end result is that a 1U EDSFF drive bay for drives up to 8mm thick allows more airflow than a typical 2U bay for 2.5" drives. There are still surface-mount host-side connectors that would be used with a more conventional backplane arrangement, but the airflow advantages of the right-angle connectors are very compelling.

The actual pin-out is defined in SFF-TA-1009, which also specifies a 12V supply providing up to 70W per slot, plus a 3.3Vaux supply. This is close to the 75W that PCIe x16 slots can provide, and far more than the typical limits for any other drive form factor. The EDSFF form factors also specify that status LEDs are to be provided by the drive itself, controlled through a dedicated pin on the drive connector rather than being a component on the backplane shining through light tubes on a drive caddy.
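
A quick sketch of what that power budget means in practice. The 70W and 12V figures are from the description above; the 32-slot backplane is an illustrative assumption borrowed from Intel's 1U Ruler target discussed in the next section.

```python
# What the SFF-TA-1009 power budget implies at the backplane level.
# The 70 W and 12 V figures come from the text above; the 32-slot count is
# an illustrative assumption matching Intel's 1U Ruler target.
SLOT_POWER_W = 70
SUPPLY_V = 12
SLOTS = 32                      # hypothetical fully populated 1U backplane

amps_per_slot = SLOT_POWER_W / SUPPLY_V
total_w = SLOT_POWER_W * SLOTS

print(f"Per slot: {amps_per_slot:.1f} A at {SUPPLY_V} V")                      # ~5.8 A
print(f"{SLOTS} slots at full power: {total_w} W ({total_w / 1000:.2f} kW)")   # 2240 W
# Delivering ~5.8 A at 12 V is much easier than the equivalent current at
# 3.3 V, which is a big part of why EDSFF standardizes on a 12 V main rail.
```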


EDSFF 1U Short and Long Rulers

Intel's original Ruler proposal was targeted at 1U servers, with the intention of allowing for at least 32 drives to be mounted vertically in the front of a server. The first Rulers that Intel showed off were over 12" long, requiring drive cages far deeper than for any existing form factor. Many server designs don't want or need to dedicate that much internal volume to storage, so a shorter variant has also been standardized and seems likely to be more popular. The EDSFF 1U Short form factor is defined in SFF-TA-1006 and the 1U Long version in SFF-TA-1007. The 1U Short is most similar to M.2 and Samsung's NF1 form factor. Like NF1, the 1U Short form factor features a wider card than M.2, allowing for two rows of NAND flash packages. 1U Short drive thickness is limited to 5.9mm, or 8mm with a heatspreader, and this form factor is intended for drives up to about 12W. 1U Short drives have mounting holes in the corners and need to be installed in caddies for use in a typical hot-swap bay. Up to 36 of these drives can fit into the front of a 1U server.
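
As a rough sense of the resulting density, the sketch below multiplies out the 36-drive and ~12W figures; the 16TB per-drive capacity is a hypothetical example rather than anything from the standard.

```python
# Rough front-of-server density estimate for EDSFF 1U Short, using the
# 36-drive and ~12 W figures from the text. The per-drive capacity is a
# purely hypothetical example, not a value from any spec.
DRIVES_PER_1U = 36
POWER_PER_DRIVE_W = 12
CAPACITY_PER_DRIVE_TB = 16      # hypothetical drive capacity

storage_power_w = DRIVES_PER_1U * POWER_PER_DRIVE_W
raw_capacity_tb = DRIVES_PER_1U * CAPACITY_PER_DRIVE_TB

print(f"Storage power budget: {storage_power_w} W per 1U")                             # 432 W
print(f"Raw capacity: {raw_capacity_tb} TB ({raw_capacity_tb / 1000:.2f} PB) per 1U")  # 576 TB
```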

The 1U Long form factor is more than just a stretched 1U Short. The corner mounting holes are replaced by the expectation that the drive include its own case and a latching mechanism at the front. Two thicknesses are defined: 9.5mm for drives up to 25W, and 18mm for drives up to 40W. While 2.5" drives that thick often use two stacked PCBs, the extra thickness of 1U Long drives is intended solely for heatsink fins, and both sides of the drive have the same clearance for their heatsinks.

EDSFF 3" (2U) Form Factors

Moving back toward traditional drive form factors, the EDSFF family also defines a set of 3" form factors, allowing for vertical mounting in a 2U server or horizontal mounting in a 1U server. The two lengths correspond roughly to those of 2.5" and 3.5" hard drives and are intended to allow for hot-swap cages of the same overall sizes. Drives can be either 7.5mm or 16.8mm thick, slightly thicker than the most common thicknesses for 2.5" drives. Like 2.5" and 3.5" drives, the EDSFF 3" form factors include a full casing around the drive, with the PCB mounted almost flush against one side of the drive instead of centered as in the 1U form factors.

source: SFF-TA-1008 revision 1.0

Hot-swap caddies are needed to provide a latching mechanism, but, like the 1U form factors, the 3" drives provide their own status LEDs. The drive-to-drive spacing is defined such that the two thicknesses can be mixed, and a 16.8mm thick drive can be inserted in place of two 7.5mm thin drives. A 1U server can fit 20 of the thinner 7.5mm drives, arranged as five stacks of four drives. This density is hard to achieve with 2.5" SSDs because of their bulkier connectors. A 2U server with the drives in a vertical orientation can fit about 44 of the thinner drives.
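
The mixing rule amounts to simple slot arithmetic: a thick drive consumes two thin-drive positions. A minimal sketch follows, with the bay sizes taken from the figures above and the helper function being purely illustrative.

```python
# Sketch of the slot arithmetic implied above: a 16.8mm drive occupies the
# space of two 7.5mm drives. The 20-slot (1U) and 44-slot (2U) counts come
# from the text; the helper function is just for illustration.
def mix_fits(thin_drives: int, thick_drives: int, thin_slots: int) -> bool:
    """Return True if the drive mix fits a bay with `thin_slots` 7.5mm positions."""
    return thin_drives + 2 * thick_drives <= thin_slots

print(mix_fits(thin_drives=20, thick_drives=0, thin_slots=20))    # True: all-thin 1U bay
print(mix_fits(thin_drives=0, thick_drives=10, thin_slots=20))    # True: all-thick 1U bay
print(mix_fits(thin_drives=12, thick_drives=5, thin_slots=20))    # False: would need 22 slots
print(mix_fits(thin_drives=24, thick_drives=10, thin_slots=44))   # True: mixed 2U bay
```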

The recommended power and thermal limits for the 3" form factors vary from 20W for the short/thin size up to 70W for the long/thick size. The drives can use any of the three connector sizes, supporting up to PCIe x16.

SSD Form Factor Comparison

Form Factor              Approx. Dimensions, W x L x H (mm)   Typical SSD Power Limit
2.5" U.2                 70 x 100 x 7-15                      25 W (15mm)
3.5"                     102 x 147 x 26                       -
PCIe HHHL                68 x 168 x 19                        40-75 W
M.2 22110                22 x 110 x 5                         8.25 W
EDSFF 1U Short           32 x 112 x 6-8                       12 W
EDSFF 1U Long            38 x 319 x 9.5                       25 W
EDSFF 1U Long            38 x 319 x 18                        40 W
EDSFF 3" Short, 7.5mm    76 x 105 x 7.5                       20 W
EDSFF 3" Long, 7.5mm     76 x 142 x 7.5                       35 W
EDSFF 3" Short, 16.8mm   76 x 105 x 16.8                      40 W
EDSFF 3" Long, 16.8mm    76 x 142 x 16.8                      70 W

Almost all of the EDSFF hardware on display at Flash Memory Summit was using the 1U drive sizes. It appears that the 1U Short size will be the most popular, and the 1U Long will mostly be reserved for storage-oriented servers and JBOF enclosures. However, Intel did demonstrate a 1U Long accelerator card using a large Altera FPGA, taking advantage of the extra power and cooling provided by the thicker 1U Long variant.

The 3"/2U form factors are most likely to be used by servers designed to support a flexible mix of storage and compute/accelerator modules. The thicker 3" form factors could also potentially expose I/O connectors on the front of the server for things like network cards, but the current standard doesn't have anything to say about such applications.

1U Long quad M.2 carrier board with Marvell 88NR2241 NVMe Switch

Microsoft Azure has been using pre-standard 1U Long Ruler cards as carriers for 4 M.2 drives in some of their systems, but the complexity makes those merely a short-term transitional solution. They have qualified one native EDSFF 1U Long SSD and put it into production, and are working on qualifying three more drive designs in that form factor. They are also looking into using the largest 3" form factor as a replacement for PCIe HHHL cards. HPE is considering adopting the 3" form factor for their next generation of servers, and many other vendors are keeping an eye on the EDSFF standards but have been less specific about their plans.

Not all of the servers on display at FMS used EDSFF form factors exclusively. We saw a 1U server with traditional 2.5" bays in which the last two bays had been replaced by a 6-drive bay for EDSFF 1U Short drives.

Comments

  • Billy Tallis - Friday, August 17, 2018

    It's never going to happen. M.2 is limited to x4, so nobody is going to make a consumer-oriented controller that supports x8, especially not with the transition to PCIe gen4 starting (sooner for SSD controllers than for x86 CPUs). Also, there are plenty of motherboards that offer x8/x4/x4 splits.
  • dark4181 - Friday, August 17, 2018

    I'm with Billy on this, especially with the shift to NVMe and next gen media. Since NVMe doesn't use a separate controller, but goes through the CPU, I think eventually we'll see SFF-TA-1006 filter all the way down (they're about the same dimensions as M.2). Gen Z will solve the PCIe issue you mentioned. As they said in the article, a single Gen Z lane carries more bandwidth than PCIe3 (and 4, too).
  • edzieba - Saturday, August 18, 2018

    "Since NVMe doesn't use a separate controller, but goes through the CPU"

    NVMe SSDs still need a controller.
  • Billy Tallis - Saturday, August 18, 2018

    NVMe SSDs don't need to connect through a host bus adapter like SAS and SATA SSDs. Everything needs a controller on the drive itself, but only PCIe SSDs can connect directly to the CPU.
  • iwod - Saturday, August 18, 2018

    We have a roadmap reaching 10PB per U in 2025, which is 420PB in one rack, compared to the HDD roadmap of 0.48PB per U in 2025.

    At what point do the density, power, speed, and latency improvements cross the threshold where it makes sense to go all NAND?
  • surt - Saturday, August 18, 2018

    Depends on who you are and your workload. My org hasn't bought a spinning disk in 2 years.
  • Xajel - Saturday, August 18, 2018

    All I hope is that a next-generation consumer storage cable (SATA successor) will be able to support both SATA and PCIe storage and include power in the same cable, at least for regular NVMe & SATA SSDs.

    And make it compact and tidy, not like those stupid drive-side U.2 connectors or motherboard-side PCIe Express slots.
  • eek2121 - Saturday, August 18, 2018

    IMO the sizes and connector should be standardized across servers as well as workstations/desktops. That would bring the lowest pricing for all. Of course, we know that'll never happen...
  • TrevorH - Saturday, August 18, 2018

    The nice thing about standards is that you have so many to choose from. (Andrew S. Tanenbaum)
  • wumpus - Tuesday, August 21, 2018

    You didn't think Intel could get away with "One Ruler to rule them all", did you? I'd think twice before following Intel's lead in standards; they aren't Nvidia, but they love nothing more than to own the market.
