Building on their recent announcement of PCIe 5.0 retimers, Microchip has announced their first PCIe 5.0 switches, as part of their Switchtec PFX product line. On paper these look like a very straightforward update to their existing Switchtec PFX switches for PCIe 4.0, carrying over all the important features but doubling the speed.

The final version of the PCI Express 5.0 specification was released in May 2019, but significant adoption is not expected to begin until Intel's Sapphire Rapids Xeon processors ship, planned for later this year. Microchip is positioning themselves to be one of the most important vendors helping enable the transition, and they expect to be the only company offering both switches and retimers for PCIe 5.0. Components like switches and retimers are becoming increasingly important with each iteration of PCIe as higher speeds are achieved at the cost of range; servers using PCIe 5.0 will only be able to put a handful of devices close enough to the CPU to operate at PCIe 5.0 speeds without some kind of repeater. Retimers like Microchip's XpressConnect parts are simple pass-through repeaters, while switches like the new Switchtec PFX parts can fan out PCIe connectivity from one or more uplink ports to numerous downstream ports.
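To put the generational jump in perspective, the usable bandwidth of a link can be estimated from the published per-lane transfer rates and the 128b/130b encoding used since PCIe 3.0. A quick back-of-the-envelope sketch (the function and its name are purely illustrative):

```python
# Rough per-direction bandwidth for a PCIe link, from the published
# transfer rates and 128b/130b encoding (PCIe 3.0 and later).
GT_PER_S = {3: 8, 4: 16, 5: 32}  # GT/s per lane

def link_bandwidth_gb_s(gen: int, lanes: int) -> float:
    """Approximate usable bandwidth in GB/s, one direction."""
    raw = GT_PER_S[gen] * lanes      # gigatransfers/s across the link
    return raw * (128 / 130) / 8     # encoding overhead, then bits -> bytes

print(link_bandwidth_gb_s(4, 16))    # ~31.5 GB/s for an x16 gen4 link
print(link_bandwidth_gb_s(5, 16))    # ~63.0 GB/s for an x16 gen5 link
```

Doubling the transfer rate doubles the usable bandwidth, since the encoding overhead is unchanged between generations.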

As with the PCIe 4.0 members of the Switchtec PFX product line, the new PCIe 5.0 switches will be available with lane counts from 28 to 100. These switches support port bifurcation down to x2 links, with bifurcation down to x1 supported on some of the switch's lanes. The switches also support up to 48 Non-Transparent Bridges (NTBs), allowing large multi-host PCIe fabrics to be assembled from several switches. However, initial demand for PCIe 5.0 is expected to center around GPUs, machine learning accelerators and high-speed NICs, so many of those advanced features will be underutilized early on, and the chips will primarily be used to feed those extremely bandwidth-hungry peripherals with an x16 link each. SSDs, using just two or four lanes each, are expected to be slower to move to PCIe 5.0.
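The fan-out arithmetic for such a switch is easy to sketch. A toy helper (only the lane counts come from the article; the function itself is purely illustrative):

```python
# Toy helper: how many downstream ports of a given width fit on a
# switch after reserving lanes for an uplink. Illustrative arithmetic
# only; real switch configurations have per-port constraints.
def downstream_ports(total_lanes: int, uplink_width: int, port_width: int):
    usable = total_lanes - uplink_width
    return usable // port_width, usable % port_width  # (ports, leftover lanes)

# 100-lane switch with an x16 uplink feeding x16 accelerators:
print(downstream_ports(100, 16, 16))  # (5, 4) -> five x16 ports, 4 lanes spare
# The same switch fanning out to x4 NVMe SSDs:
print(downstream_ports(100, 16, 4))   # (21, 0) -> twenty-one x4 ports
```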

The new PCIe 5.0 Switchtec PFX switches are currently sampling to select customers, including a development/evaluation board based around the 100-lane switch. Microchip wouldn't disclose any pricing information for the new switches, but they are bound to be more expensive than the PCIe 4.0 switches with the same lane counts. Power consumption is also going up, but Microchip wouldn't quantify the change.

Microchip's lineup of PCIe switches for earlier generations also includes the Switchtec PSX and PAX families with more advanced functionality than the PFX switches. PCIe 5.0 versions of the PSX and PAX families have not been announced, but it's normal for those versions to come later. Microchip's only competition for leading-edge PCIe switches comes from Broadcom/PLX PEX switches. Broadcom has not yet publicly announced their PCIe 5.0 switches, but they are doubtless also planning to take advantage of the launch of Intel's Sapphire Rapids platform.

Source: Microchip


40 Comments


  • Arsenica - Wednesday, February 3, 2021 - link

    Oh PCIe 5.0, yet another of the technologies that Intel foolishly pursued for the Exascale Aurora supercomputer, just to have AMD deliver a 50% faster supercomputer at least 6 months before Intel even starts assembling it.
  • RogerAndOut - Thursday, February 4, 2021 - link

    AMD has good reason to consider PCIe 5.0 (and 6.0). InfinityFabric/Infinity Architecture communication takes place over PCIe lanes. So the desktop CPUs and GPUs are likely to gain a feature that is not currently needed there, because support is being added for data center EPYC and GPU-based systems.

    This was first seen in the move from PCIe 3 to PCIe 4 when the EPYC 7002 series shipped. The doubling of the speed of the Infinity Fabric link between two processors was so great that Dell has a system where only 48 (rather than 64) PCIe lanes from each CPU are allocated to the link, with the remaining 32 (16 from each CPU) used for other tasks, resulting in a system with 160 lanes for general I/O. Why so many? Well, it supports 24x NVMe SSDs, so that's 96 lanes allocated to start.

    The other advantage for AMD is that their off-chip PCIe implementation is a function of the I/O die and not their CPU core(s), so if they have a reason to, they can mix and match features depending on what they wish to release.
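The lane accounting in the comment above checks out. A quick sketch of the arithmetic, using the numbers as given in the comment and assuming 128 PCIe lanes per EPYC CPU:

```python
# Checking the 2-socket EPYC lane arithmetic from the comment above
# (assumes 128 PCIe lanes per EPYC 7002-series CPU).
lanes_per_cpu = 128
link_default = 64        # lanes per CPU normally used for the socket link
link_reduced = 48        # the narrower link in the Dell configuration

default_io = 2 * (lanes_per_cpu - link_default)   # lanes left for I/O
reduced_io = 2 * (lanes_per_cpu - link_reduced)
print(default_io, reduced_io)       # 128 160

nvme_lanes = 24 * 4                 # 24 SSDs at x4 each
print(reduced_io - nvme_lanes)      # 64 lanes left over after the SSDs
```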
  • COtech - Thursday, February 4, 2021 - link

    With distance on the motherboard becoming increasingly important, would it make sense to use both sides of the motherboard to place some resources closer to the CPU?
  • Tomatotech - Thursday, February 4, 2021 - link

    Yes. Many motherboards already have an NVMe drive slot on the bottom. It's not a perfect location, but it makes sense going forward, especially if manufacturers want two PCIe 5 NVMe drive ports but only have 1" of distance in which to locate them.
  • back2future - Thursday, February 4, 2021 - link

    In evaluations of JEDEC and Micron specifications and memory emulation at 3.6Gb/s (clock rate 1.8GHz), it was explained that for in-spec functionality, with the voltage difference between high and low signal within 700mV, the setup and hold timing for a bit has to fall within a 62+87 ps window from clock inversion. That is a distance of around 1.75" at the speed of light.
    Insertion losses when a signal changes PCB layers can vary from 65 to 5dB for 120-160mil down to below 60mil. That's for the DDR4-3600 3.6GT/s 25.6Gb/s standards. DDR5-8400: 525MHz*64bit/8*16 = 2× 33.6 GB/s at a 4.2GHz clock rate. PCIe4 gets 16GT/s, PCIe5 32GT/s, and data signal jitter termination for DDR4 is required to get jitter down to 1/5 of what it would be without it ... real numbers go from around 130ps to 35ps at 3.6GT/s.
    There's some kind of art to this business, and knowledge of materials.
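The comment's 1.75-inch figure can be sanity-checked directly. A sketch taking the 62+87 ps window at the free-space speed of light, as the comment does (actual propagation on PCB traces is roughly half that):

```python
# Sanity check of the "around 1.75 inches at the speed of light"
# figure from the comment above.
c = 299_792_458             # m/s, speed of light in vacuum
window_ps = 62 + 87         # setup + hold window from the comment, in ps
distance_m = c * window_ps * 1e-12
print(distance_m / 0.0254)  # ~1.76 inches
```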
  • back2future - Thursday, February 4, 2021 - link

    Sorry, meant: DDR4-3600 3.6GT/s 25.6GB/s standards
  • back2future - Thursday, February 4, 2021 - link

    ... considering signaling termination power, have a look at current PCIe4 NVMe heat sinks, almost all of which appear to be above 1/2" in height, https://images.anandtech.com/doci/16458/Q140_678x4... close to the M.2 connector, rev 1.1 (or already up to the theoretical rev 4.0, 12/2020)?
  • Wizard2021 - Wednesday, February 10, 2021 - link

    20 years is a long time to get out that PCIE 5.0
    20 years of enough SLEEP for you ??
    Well at least you got it out!
    It about time
    But way too late!
    Good job anyway
    What your next job PCI E 6.0 ??
    Need Extra Sleep for another 20 years ???
  • npz - Tuesday, February 16, 2021 - link

    > The switches also support up to 48 Non-Transparent Bridges (NTBs), allowing for large multi-host PCIe fabrics to be assembled using several switches. However, initial demand for PCIe is expected to center around GPUs, machine learning accelerators and high-speed NICs, so many of those advanced features will be underutilized early on ...

    Not just underutilized, but they also tend to have botched implementations because of that under-utilization. It's been that way for over a decade in my own experience. Intel's own platforms, concerning the PCIe root complex on the CPU and the RC on their PCH, are buggy with errata galore, some fixed, many not, requiring OS/driver workarounds for features in the PCIe spec that aren't widely used.
  • npz - Tuesday, February 16, 2021 - link

    Forgot to add that PLX was also another bad culprit, at least back before its acquisition. I have no experience with Microchip, so I can only hope they thoroughly test and debug all the features they claim to support from the spec.
