Gracemont Microarchitecture (E-Core) Examined

The smaller core in Intel’s hybrid Alder Lake design is called the E-core, and it is built on the Gracemont microarchitecture. It forms part of Intel’s Atom family of processors, and is a significant microarchitectural jump over the previous Atom core design, Tremont.

  • 2008: Bonnell
  • 2011: Saltwell
  • 2013: Silvermont
  • 2015: Airmont
  • 2016: Goldmont
  • 2017: Goldmont Plus
  • 2020: Tremont
  • 2021: Gracemont

Based on Intel’s diagrams, the company is pitching that in the space it would normally fit a single Golden Cove P-core, it can instead enable a four-core cluster of E-cores along with a shared 4MB L2 cache between them.

For performance, Intel has some pretty wild claims. It splits them up into single thread and multi-thread comparisons using SPECrate2017_int.

When comparing 1C1T of Gracemont against 1C1T of Skylake, Intel’s numbers suggest:

  • +40% performance at iso-power (using a middling frequency)
  • 40% less power* at iso-performance (peak Skylake performance)

*The '<40%' figure is understood to mean 'below 40% of the power'.

When comparing 4C4T of Gracemont against 2C4T of Skylake, Intel’s numbers suggest:

  • +80% performance peak vs peak
  • 80% less power at iso-performance (peak Skylake performance)

We pushed the two Intel slides together to show how they presented this data.

By these graphics, it looks like for peak single thread we should see around +8% better performance than Skylake while consuming just over half the power – look for Cinebench R20 scores around 478 for one Gracemont thread (a Skylake 6700K scored 443). With +8% for single thread in mind, the +80% in MT comparing four Gracemont cores to two fully loaded Skylake cores seems a little low – we have double the physical cores with Gracemont compared to Skylake here. But there is likely some additional performance regression with the cache structure on the new Atom core, which we’ll get to later on this page.
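
For reference, that Cinebench estimate is simply the +8% read off Intel’s chart applied to a known Skylake result:

    443 (Core i7-6700K, 1T Cinebench R20) × 1.08 ≈ 478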

These claims are substantial. Intel hasn’t compared the new Atom core generation on generation, because it felt that having AVX2 support would put the new Atom at a significant advantage. But what Intel is saying with these graphs is that we should expect better-than-Skylake performance at much less power. We saw Skylake processors with up to 28 cores in HEDT – it makes me wonder if Intel might not enable its new Atom core for that market. If that’s the case, where is our 64-core Atom version for HEDT? I’ll take one.

Front End

The big item about the Tremont front end of the core was the move to dual three-wide decode paths, enabling two concurrent streams of decode that could each support three instructions per cycle. That still remains in Gracemont, but is backed by a double-size 64 KB L1 instruction cache. This ties into the branch predictor, which enables prefetchers at all cache levels, along with a 5000-entry branch target cache which Intel says is improved over the previous generation.

Back on the decoder, Intel supports on-demand decode, which stores a history of previous decodes in the instruction cache; if recent misses are recalled at the decode stage, the on-demand stream will pull directly from the instruction cache instead, saving time. If the prefetch/decode is working well, the content in the instruction cache is updated; if it is doing poorly, the scope is ‘re-enabled for general fetches’ to get a better understanding of the instruction flow. This almost sounds like a micro-op cache without the physical cache, but it is more about instruction streaming. Either way, the decoders can push up to six uops into the second half of the front end.

For Gracemont, the reorder buffer size has increased from 208 entries in Tremont to 256, which is an important aspect given that Gracemont now has a total of 17 (!) execution ports, compared to eight in Tremont. This is also significantly wider than the execution capabilities of Golden Cove's 12 ports, related to the disaggregated integer and FP/vector pipeline design. However, despite that width, the allocation stage feeding into the reservation stations can only process five instructions per cycle. On the return path, each core can retire eight instructions per cycle.
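
Read together, and ignoring every other limiter, those widths put a rough ceiling on sustained throughput at the narrowest stage:

    sustained throughput ≤ min(6 from decode, 5 at allocate, 8 at retire) = 5 per cycle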

Back End

So it’s a bit insane to have 17 execution ports here. There are a lot of repeated units as well, which means that Gracemont is expecting to see a lot of the same instructions and needs the parallelism to do more per cycle, even if units then sit idle waiting for the next instructions to come down the pipe. Overall we have:

  • 4 Integer ALUs (ALU/Shift), two of which can do MUL/DIV
  • 4 Address Generator Units, 2 Load + 2 Store
  • 2 Branch Ports
  • 2 Extra Integer Store ports
  • 2 Floating Point/Vector store ports
  • 3 Floating Point/Vector ALUs: 3x ALUs, 2x AES/FMUL/FADD, 1x SHA/IMUL/FDIV

It will be interesting to see exactly how many of these can be accessed simultaneously. In previous core designs a lot of this functionality would be enabled through the same port – even Alder Lake’s P-core only has 12 execution ports, with some ports doing double duty on vector and integer workloads. In the P-core there is a single scheduler for both types of workloads, whereas in the E-core there are two separate schedulers, which is more akin to what we see on non-x86 core designs. It’s a tradeoff in complexity and ease-of-use.

The back end is supported by a 32 KiB L1-Data cache, which supports a 3-cycle pointer chasing latency and 64 outstanding cache misses. There are two load ports and two store ports, which means 2 x 16-byte loads and 2 x 16-byte stores to the L1-D.
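
As an aside, the '3-cycle pointer chasing latency' is the kind of figure usually derived from a long chain of dependent loads that fits inside the cache being tested. A minimal sketch of that technique in C – purely illustrative, and not Intel's methodology – might look like this:

    /* Pointer-chase latency sketch: every load depends on the previous one,
       so the average time per iteration approximates load-to-use latency. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define NODES 256                /* 256 x 64 B = 16 KiB, fits in a 32 KiB L1-D */
    #define ITERS 100000000UL

    struct node { struct node *next; char pad[56]; };  /* one node per cache line */

    int main(void) {
        struct node *buf = aligned_alloc(64, NODES * sizeof(struct node));

        /* Build a simple ring of nodes. (A random permutation would defeat the
           hardware prefetchers more thoroughly, but this keeps the sketch short.) */
        for (int i = 0; i < NODES; i++)
            buf[i].next = &buf[(i + 1) % NODES];

        struct node *p = buf;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (unsigned long i = 0; i < ITERS; i++)
            p = p->next;                               /* dependent load chain */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
        printf("%.2f ns per load (p=%p)\n", ns / ITERS, (void *)p);
        return 0;
    }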

There is also a shared 4 MB L2 cache across all four E-cores in a cluster, with a 17-cycle latency. The shared L2 cache can support 64 bytes per cycle read/write per core, which Intel states is sufficient for all four cores. The new L2 supports up to 64 outstanding misses to the deeper memory subsystem – which seems fair, but has to be shared amongst the four cores.
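
To put 64 bytes per cycle into context – assuming, purely for illustration, an E-core running at 3.0 GHz, which is not a figure Intel quoted here:

    64 bytes/cycle × 3.0 × 10^9 cycles/s ≈ 192 GB/s of L2 bandwidth per core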

Intel states that it has a Resource Director that will arbitrate cache accesses between the four cores in a cluster to ensure fairness, confirming that Intel is building these E-cores for multi-threaded performance rather than latency-sensitive scenarios where one thread might have priority.

Other Highlights

Gracemont is the first Atom core to have AVX2 enabled, with two vector ports that support FMUL and FADD (port 20 and port 21), which means that the peak performance uplift compared to previous Atoms should be substantial. The addition of VNNI-INT8 on the AVX unit means that Intel wants these E-cores to enable situations where high inference throughput is needed, such as, perhaps, video analysis.
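
As an illustration of why VNNI-INT8 matters for inference throughput: the classic AVX2 way to compute an int8 dot product needs a multiply-add pair plus a separate accumulate for every 32 bytes, and the VNNI instruction (vpdpbusd) folds those steps into a single operation. A sketch of the plain AVX2 version is below – illustrative code rather than anything from Intel, and the helper name dot_u8s8 is made up:

    /* int8 dot product with plain AVX2 (compile with -mavx2).
       VNNI replaces the maddubs + madd + add sequence with one vpdpbusd. */
    #include <immintrin.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Dot product of n unsigned-int8 activations with signed-int8 weights; n % 32 == 0. */
    static int32_t dot_u8s8(const uint8_t *a, const int8_t *w, int n) {
        __m256i acc = _mm256_setzero_si256();
        const __m256i ones = _mm256_set1_epi16(1);
        for (int i = 0; i < n; i += 32) {
            __m256i va  = _mm256_loadu_si256((const __m256i *)(a + i));
            __m256i vw  = _mm256_loadu_si256((const __m256i *)(w + i));
            __m256i p16 = _mm256_maddubs_epi16(va, vw);  /* u8*s8 -> pairwise s16 sums */
            __m256i p32 = _mm256_madd_epi16(p16, ones);  /* widen s16 pairs -> s32     */
            acc = _mm256_add_epi32(acc, p32);
        }
        /* horizontal reduction of the eight 32-bit lanes */
        __m128i lo = _mm256_castsi256_si128(acc);
        __m128i hi = _mm256_extracti128_si256(acc, 1);
        __m128i s  = _mm_add_epi32(lo, hi);
        s = _mm_add_epi32(s, _mm_shuffle_epi32(s, _MM_SHUFFLE(1, 0, 3, 2)));
        s = _mm_add_epi32(s, _mm_shuffle_epi32(s, _MM_SHUFFLE(2, 3, 0, 1)));
        return _mm_cvtsi128_si32(s);
    }

    int main(void) {
        uint8_t a[32]; int8_t w[32];
        for (int i = 0; i < 32; i++) { a[i] = 1; w[i] = 2; }
        printf("%d\n", dot_u8s8(a, w, 32));   /* 32 x (1 x 2) = 64 */
        return 0;
    }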

Intel was keen to point out that Gracemont has all of its latest security features, including Control-flow Enforcement Technology (CET), and virtualization redirects under its VT-rp feature.

Overall, Intel stated that the E-cores are tuned for voltage more than anything else (such as performance or area). This means that the E-cores are set to use a lot less power, which may help in mobile scenarios. But as mentioned on the first page, it will depend on how much power the ring has to consume in that environment. It is worth noting that each four-core Atom cluster only has a single stop on the full ring in Alder Lake, which Intel says should not cause congestion, but it is a possibility – if each core is fully loaded, there is only 512 KB of L2 cache per core before making the jump to main memory, which suggests that a fully loaded scenario might create a bottleneck.

Comments

  • mode_13h - Friday, August 20, 2021 - link

    > they can't exactly prevent open source software from developing

    Well, they don't have to release all of the information that would be needed for Linux to do the same thing.

    > Linux will probably have better support than Windows itself

    Intel is one of the largest contributors to the Linux kernel. No doubt, any support for their Thread Director will be developed by them.
  • Obi-Wan_ - Thursday, August 19, 2021 - link

    Is it likely that Alder Lake will consume noticeably less power when near idle or during video playback/streaming, or are existing CPUs already quite efficient in these cases?

    I'm thinking an HTPC that should be as silent as possible when idling and streaming, but also have a high power budget (effectively noise budget) when gaming for example.
  • mode_13h - Friday, August 20, 2021 - link

    If you really care about minimizing idle power, then I think you probably need to use LPDDR memory. Integrated graphics should also be a priority.

    Another thing people miss is that the PSU should not only be a high-efficiency model, but also not heavily over-spec'd. Power supplies lose a lot of efficiency, when you run them well below peak load.
  • mode_13h - Thursday, August 19, 2021 - link

    > The desktop processor will have sixteen lanes of PCIe 5.0

    I'll believe it when I see it. Let's not forget that it took Intel 2 generations to get PCIe 4.0 working! They had to reverse course on enabling it in Comet Lake, and that was years after POWER, Ryzen, and some ARM CPUs had it.

    I also don't see the value of having it now, given that we know DG2 is going to be only PCIe 4, nor are we aware of any other upcoming GPUs that will support 5.0.
  • Bp_968 - Sunday, August 22, 2021 - link

    Pcie 5 is twice as fast and backwards compatible. Why would you *not* want it in your system if possible? Its not like you can add it later.

    Pcie4 is mostly useless for gpus already (regardless if they "support" it or not) so pcie5 isnt going to improve anything gpu wise just like it didnt improve with pcie4.

    But where it *will* be an improvement is in peripherals (the x4 channel to the chipset just got twice as fast) and support for pcie5 storage. Oh and easier support for high speed interconnects like USB 3-4 and 10gb ethernet.

    Personally id prefer the layout be different. X16 or X8/x8 (plus 2 x4 slots for nvme i think) for the pcie5 is ok, but on the pcie4 side I'd like to see x16, x8/x8 as an option as well. That way you could use the pcie5 slots on other stuff and put the gpu in a x16 or x8 pcie4 slot (and it perform just as well). Support for 4+ nvme drives will be nice in the future. One or two pcie5 high speed units and then spots for slower SSDs for mass storage (x2 or x4 pcie4 being the "slow" slots).
  • mode_13h - Monday, August 23, 2021 - link

    > Pcie 5 is twice as fast and backwards compatible.
    > Why would you *not* want it in your system if possible?

    Mainly due to board and peripheral costs, I think. Beyond that, power dissipation should be well above PCIe 4.0 and it could also be a source of stability issues.

    > But where it *will* be an improvement is in peripherals
    > (the x4 channel to the chipset just got twice as fast)

    This is actually the one place where it makes sense to me. The chipset can be located next to the CPU, so that hopefully no retimers will be needed. And the additional power needed to run a short x4 link @ 5.0 speeds hopefully shouldn't be too bad. When leaks first emerged about Alder Lake having PCIe 5.0, I suspected it was just for the chipset link.

    > support for pcie5 storage.

    By the time there are any consumer SSDs that exceed PCIe 4.0 x4 speeds, we'll already be on a new platform. It took over a year for PCIe 4.0 SSDs to finally surpass PCIe 3.0 x4 speeds, and many still don't.

    > easier support for high speed interconnects like USB 3-4

    The highest-rated speed for USB4 is PCIe 3.0 x4. However, even a chipset link of PCIe 4.0 x4 will mean you can support it with bandwidth to spare. That said, I think the highest-speed USB links are typically CPU-direct, in recent generations.

    > 10gb ethernet.

    You can already do that with a PCIe 4.0 x1 link.
  • Spunjji - Monday, August 23, 2021 - link

    > But where it *will* be an improvement is in peripherals
    > (the x4 channel to the chipset just got twice as fast)

    It doesn't use PCIe 5.0 for the chipset link, so it doesn't even have that advantage. I genuinely think it's premature. I guess we'll have to see what motherboard costs look like to know whether it was worth it for future-proofing, or whether it's just spec wankery.
  • mode_13h - Tuesday, August 24, 2021 - link

    > It doesn't use PCIe 5.0 for the chipset link

    I'm pretty sure they didn't specify that, one way or another. I'm pessimistic, though. Then again, didn't Rocket Lake have a PCIe 4.0 x8 link to the chipset? If so, moving up to PCIe 5.0 x4 is plausible.

    > or whether it's just spec wankery.

    It's definitely wankery. I'm just waiting for them either to walk it back, a la Comet Lake's PCIe 4.0 support, or for users to encounter a raft of issues, once some PCIe 5.0 GPU is finally released and people try to actually *use* the capability.
  • Spunjji - Friday, August 27, 2021 - link

    All the resources I'm finding online say it's a DMI 4.0 x8 link to the chipset, so the same as Rocket Lake. Personally I think that's going to be plenty for the vast majority of their users, assuming they follow up at some point in the not-too distant future with an up-to-date HEDT platform for the users who need more.
  • mode_13h - Saturday, August 28, 2021 - link

    That's a shame, because the DMI link is the one place where Intel could've gotten practical benefits from using PCIe 5.0, right away.
