Today AMD added two new processors to its EPYC lineup: the EPYC 7662, its fourth 64-core CPU, aimed at applications that need loads of cores, and the EPYC 7532, a 32-core CPU for programs that can take advantage of a large L3 cache. Dell and Supermicro have already signed up to use the new chips, and other system builders are sure to follow.

Like AMD's other 64-core EPYC processors, the EPYC 7662 is a 64-core part with 256 MB of L3 cache. Intended as a cheaper 64-core option for customers, the new processor slots in a tier below AMD's existing chips, and fittingly it has the lowest clockspeeds of the bunch: a base clock of just 2.0 GHz and a boost clock of 3.3 GHz. Meanwhile, the TDP is rated at 225 W, which is typical for many higher-end EPYC SKUs, but also higher than the 200 W EPYC 7702 above it. In essence, we're looking at a less power-efficient SKU for customers who want to save some money on hardware at the cost of greater cooling needs and power consumption.

Meanwhile, AMD's other new chip is the 32-core EPYC 7532. This chip is clocked at a 2.4 GHz base and turbos to 3.3 GHz; more importantly, it offers something not found on any other 32-core EPYC SKU: 256 MB of L3 cache. This lets the 7532 fill the large-cache niche that AMD and other server vendors always produce a SKU or two for, with the souped-up chip offering 8 MB of L3 cache per core instead of the usual 4 MB. Depending on the workload, a large cache configuration can maximize performance in cache-sensitive applications, such as ANSYS CFX workloads, and can also help single-threaded/lightly-threaded performance that otherwise wouldn't benefit from more cores. The catch for AMD, in turn, is that building a 256 MB L3 SKU requires eight chiplets no matter how many cores are enabled, so the 7532 is still a full-chiplet design, just with half of the CPU cores disabled.
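To make the chiplet math above concrete, here's a quick back-of-the-envelope sketch. The `epyc_config` helper is ours, not AMD's; it simply assumes Zen 2's building block of 8 cores and 32 MB of L3 per chiplet:

```python
# Zen 2 ("Rome") building block: each CCD chiplet carries 8 cores and 32 MB of L3.
CORES_PER_CHIPLET = 8
L3_PER_CHIPLET_MB = 32

def epyc_config(total_l3_mb, active_cores):
    """Chiplets needed to reach a given L3 capacity, and the resulting L3 per core."""
    chiplets = total_l3_mb // L3_PER_CHIPLET_MB
    return chiplets, total_l3_mb / active_cores

# EPYC 7532: 256 MB of L3 with 32 active cores
# -> needs all 8 chiplets (half the cores fused off), giving 8 MB of L3 per core
print(epyc_config(256, 32))  # (8, 8.0)

# EPYC 7502: 128 MB of L3 with 32 active cores
# -> 4 fully-enabled chiplets, the usual 4 MB of L3 per core
print(epyc_config(128, 32))  # (4, 4.0)
```

This is why the 7532 can't be built as a cheaper 4-chiplet part: the L3 comes attached to the chiplets, so doubling the cache means populating all eight.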

AMD EPYC 7002 Processors (2P)
             Cores /    Frequency (GHz)
             Threads    Base    Max      L3       TDP     Price
EPYC 7H12    64 / 128   2.60    3.30     256 MB   280 W   ?
EPYC 7742    64 / 128   2.25    3.40     256 MB   225 W   $6950
EPYC 7702    64 / 128   2.00    3.35     256 MB   200 W   $6450
EPYC 7662    64 / 128   2.00    3.30     256 MB   225 W   ?
EPYC 7642    48 / 96    2.30    3.20     256 MB   225 W   $4775
EPYC 7552    48 / 96    2.20    3.30     192 MB   200 W   $4025
EPYC 7542    32 / 64    2.90    3.40     128 MB   225 W   $3400
EPYC 7532    32 / 64    2.40    3.30     256 MB   200 W   ?
EPYC 7502    32 / 64    2.50    3.35     128 MB   200 W   $2600
EPYC 7452    32 / 64    2.35    3.35     128 MB   155 W   $2025
EPYC 7402    24 / 48    2.80    3.35     128 MB   155 W   $1783
EPYC 7352    24 / 48    2.30    3.20     128 MB   180 W   $1350
EPYC 7302    16 / 32    3.00    3.30     128 MB   155 W   $978
EPYC 7282    16 / 32    2.80    3.20     64 MB    120 W   $650
EPYC 7272    12 / 24    2.90    3.20     64 MB    155 W   $625
EPYC 7262    8 / 16     3.20    3.40     128 MB   120 W   $575
EPYC 7252    8 / 16     3.10    3.20     64 MB    120 W   $475

Like all of the latest AMD EPYC processors, the new CPUs feature 128 PCIe 4.0 lanes, support up to 4 TB of DDR4-3200 memory, and offer robust security capabilities.

Dell and Supermicro will be the first companies to use AMD's EPYC 7662 and EPYC 7532 processors, with the chips going into Dell's PowerEdge R6515, R7515, R6525, R7525, and C6525 servers, as well as Supermicro's A+ and BigTwin machines.

Source: AMD


  • austinsguitar - Wednesday, February 19, 2020 - link

256MB of cache on a cpu... i had less in my first pc :) what a time to be alive man.
  • austinsguitar - Wednesday, February 19, 2020 - link

    less deditated wam. woops
  • FreckledTrout - Thursday, February 20, 2020 - link

    LOL You need an edit button. -wam

    I had a 40MB hard drive in my first 286 PC.
  • eek2121 - Wednesday, February 19, 2020 - link

    My first dozen or so machines had less RAM, and I've always overspent on RAM.

    On a more interesting note, this tells me that AMD is willing to play with L3 cache configurations for the Zen platform. I wonder if Zen 3 will bring us varying levels of cache?
  • Santoval - Wednesday, February 19, 2020 - link

    Zen 3 apparently ditches the CCX design (or makes the entire 8-core chiplet a "CCX" only in name) and unifies the L3 cache of the chiplet. This means that all 8 cores will have roughly the same latency when they access the entire L3 cache, which is the case with Intel's L3 cache - at least up to 8 or 10 cores - as well.

    As a trade-off this scheme might limit the available cache options, though that would not affect Epyc. As we can see in the table of the article Epyc's L3 cache is available in multiples of 64 MB, i.e. the cache of a pair of chiplets. AMD could offer up to multiples of 32 MB, though that would arguably be quite an excessive... cache variety.

    Therefore, assuming Zen 3 will still have 32 MB of (unified) L3 cache per chiplet, even if that cache's size could not be configured (it probably will) that wouldn't matter to Epyc. What matters more is that the ditching of CCX will be a boon for multi-thread performance.
  • Hul8 - Wednesday, February 19, 2020 - link

    I believe Ryzen CPUs can't access L3 cache on other CCXes (each Zen 2 core only sees the CCX-local 16MB), so it would be true even today that "all cores have the same latency to the L3 (that they have access to)".

    The Zen 2 L3 cache is comprised of 4 x 4MB slices, any number of which can be disabled so the theoretical flexibility is there already.

    Zen 2 needs the 16MB L3 per CCX for performance reasons, because that's all each core will see. Once you combine two CCXes, it might make more sense to tone down the L3 size per die to something like 24MB in order to keep latencies in check (larger cache has higher latency). If the die ends up having 24MB+ of L3, all but the highest-end consumer parts could well have some of that fused off.
  • DanNeely - Wednesday, February 19, 2020 - link

    The L1 cache on a cheap phone SoC has more ram than my first 2 computers.
  • PeachNCream - Thursday, February 20, 2020 - link

    VIC-20 for me with RAM upgraded to 32KB that I got second hand after the previous owner purchased a new C64. Good times on that old clunker though the keyboard ergonomics left a lot to be desired. Compared to the manual typewriter I was using before that, where my fingers once in a great while got stuck between the keys, it was an improvement.
  • prisonerX - Friday, February 21, 2020 - link

    32K? Luxury. I had only 4 or 8K in my first TRS-80. And by "my" I mean the display unit in the Tandy Electronics store I used to hang around in as a kid.
  • yeeeeman - Thursday, February 20, 2020 - link

    You can thank tsmc and especially apple for that. The 7nm process is a result of great r&d on tsmc part and a great partnership with apple, pushing them on moving faster to each node by providing the cash to buy equipment and give precious feedback for each node.
