Golden Cove Microarchitecture (P-Core) Examined

Section by Andrei Frumusanu

Within Alder Lake and Sapphire Rapids, we find Intel’s brand-new performance core (P-Core) microarchitecture, called Golden Cove. The new microarchitecture and core design follow the Willow Cove core in Tiger Lake, the Sunny Cove core in Ice Lake, and the derivative Cypress Cove core in Rocket Lake.

Although the new design is a follow-up to the last few years of core microarchitectures, Intel sees Golden Cove as a major step-function update, with massive revamps of the fundamental building blocks of the CPU. The company goes as far as saying it lays the performance foundation for the next decade of compute, meaning Intel is seemingly planning to use it as the basis for the evolution of its cores for at least a few years.

The last time we had seen major upgrades to Intel’s core microarchitecture was in Ice Lake’s Sunny Cove, which made notable increases to the design’s out-of-order execution capabilities and depth, while Willow Cove and Cypress Cove were more iterative designs focusing on the memory subsystem.

Golden Cove, in comparison, makes gigantic changes to the microarchitecture’s front-end – in fact, Intel states that this is the largest microarchitectural upgrade in the core family in a decade, similar in magnitude to the changes Skylake brought many years ago.

Starting off with the most obvious change: Intel is moving from a 4-wide decode machine to a 6-wide microarchitecture, a first amongst x86 designs and a major design focus point. Over the last few years there has been ongoing discussion about decoder widths and the nature of x86’s variable-length instruction set, which makes it difficult to design wider decoders, compared to say a fixed-length ISA like Arm’s, where adding decoders is relatively easier to do. Notably, last year AMD’s Mike Clark had noted that while it’s not a fundamental limitation, going for decoders wider than 4 instructions can create practical drawbacks, such as added complexity and, most importantly, added pipeline stages. For Golden Cove, Intel has decided to push forward with these changes, and a compromise that had to be made is that the design now adds an additional stage to the mispredict penalty of the microarchitecture, so the best case goes up from 16 cycles to 17 cycles. We asked if there is still some kind of special-case decoder layout as in previous generations (such as the 1 complex + 3 simple decoder setup), however the company wouldn’t delve deeper into the details at this point in time. To feed the decoder, the fetch bandwidth going into it has been doubled from 16 bytes per cycle to 32 bytes per cycle.
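
As a rough illustration of what that mispredict penalty means in practice, the classic way to expose misprediction cost in software is a data-dependent branch over random values. The following C sketch is purely illustrative (our own construction, not Intel’s measurement methodology):

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Sum only elements above a threshold. With random input the branch is
 * essentially unpredictable, so roughly half the iterations pay the full
 * pipeline mispredict penalty (a best case of ~17 cycles on Golden Cove);
 * sort the array first and the same loop runs near its arithmetic limit.
 * (Check the compiler output: if the branch is turned into a cmov, the
 * effect disappears.) */
static uint64_t sum_above(const int *v, size_t n, int threshold)
{
    uint64_t sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (v[i] > threshold)
            sum += (uint64_t)v[i];
    }
    return sum;
}

int main(void)
{
    enum { N = 1 << 20 };
    int *v = malloc(N * sizeof *v);
    for (int i = 0; i < N; i++)
        v[i] = rand() & 0xFF;
    /* Time this with random data, then with the array sorted; the
     * per-element difference approximates the mispredict cost. */
    printf("%llu\n", (unsigned long long)sum_above(v, N, 0x80));
    free(v);
    return 0;
}
```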

Intel states that the decoder is clock-gated 80% of the time, with the core instead relying on the µOP cache. This structure has also seen extremely large changes this generation: it has nearly doubled from 2.25K entries to 4K entries, mimicking the similarly large increase we saw in AMD’s move from Zen to Zen 2, increasing the hit rate and further avoiding the route through the more costly decoders.

Although the I-cache has remained the same in size at 32KB, Intel has improved latencies for larger code footprints by increasing the L1 iTLB from 128 entries to 256 entries (16 -> 32 for 2M/4M hugepages).

Because the general misprediction penalty has gone up this generation, Intel is counteracting this by massively improving branch prediction accuracy. First of all, the L2 BTB (branch target buffer) has more than doubled, growing from 5K entries to 12K entries, making it by far the biggest in the industry that we’re aware of (Zen3: 6.5K, Cortex-A77+: 8K, Apple: ?). The branch predictors themselves are also said to be smarter, with improved accuracy, although vendors are always quite secretive about these designs, so we didn’t have many details divulged.
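
For what it’s worth, BTB capacity is one of the easier structures to probe from software: a long chain of always-taken branches runs at full speed while every target fits in the BTB, and slows down once it doesn’t. A minimal sketch of that idea using GCC/Clang inline assembly follows; the branch count and the approach are illustrative assumptions on our part, not how the figures above were obtained:

```c
#include <stddef.h>

/* A dense chain of taken unconditional branches, each at a distinct address.
 * While the chain fits in the BTB, targets are predicted early and the chain
 * runs at roughly one taken branch per cycle; grow the .rept count past the
 * BTB capacity (the 5K vs 12K figures discussed above) and the time per
 * branch rises as the front-end has to re-steer after decode. */
void btb_chain(size_t reps)
{
    for (size_t r = 0; r < reps; r++) {
        __asm__ volatile (
            ".rept 8192   \n\t"   /* number of branches in the chain */
            "jmp 1f       \n"
            "1:           \n\t"
            ".endr"
            ::: "memory");
    }
}
```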

The decoders and the µOP cache feed a µOP queue – beyond the 50% wider decode, the µOP cache also sees increased bandwidth and is now able to emit 8 µOPs per cycle instead of 6. The µOP queue itself has seen a structural change: for a single thread (only one SMT logical core loaded), the structure can be aggregated and the usable depth doubles from 70 entries to 144 entries, while for two threads it has gone up slightly, from 70 to 72 entries per thread.

In the mid-core, there have also been quite large changes. The core here increases from 5-wide to 6-wide, meaning that in theory it should still be a maximum 6 IPC microarchitecture, though we’re talking about µOPs here and not actual instructions.

Intel is noting that they’re doing an increased amount of dependency resolution at the allocation stage, actually eliminating instructions that would otherwise have to be emitted to the back-end execution resources. Intel here didn’t want to elaborate a lot, only stating that this capability has increased quite a lot generation over generation.

The out-of-order instruction window of Golden Cove is massively larger than Sunny Cove’s – increasing the re-order buffer (ROB) size from 352 to 512 entries. This is over double that of AMD’s Zen3 µarch, and really only second to Apple’s core microarchitecture, which we’ve measured in at around 630 instructions. Traditionally, we’ve always been told that increasing the ROB has diminishing returns and correlates with quite an increase in power consumption – but on the other hand we’ve seen Apple achieve extremely large OOO windows. It’s not clear how exactly Intel managed to increase their window by such a large degree in a single jump, but it’s a very impressive and admittedly unexpected design characteristic of the Golden Cove cores.
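
Out-of-order window figures like these are usually estimated with a microbenchmark that separates two long-latency, cache-missing loads by a growing amount of cheap filler: as long as the filler fits in the window, the two misses overlap and the loop costs roughly one memory latency per iteration; once it doesn’t, the cost jumps towards two. A rough C sketch of that pattern (the filler count and the use of NOPs are illustrative assumptions, not the exact methodology behind the numbers above):

```c
#include <stddef.h>

/* a and b point into two large, randomly-shuffled pointer chains so every
 * dereference misses the caches. Sweep the filler count and look for the
 * knee in time-per-iteration: that is roughly the out-of-order window. */
#define NOP1  "nop\n\t"
#define NOP16 NOP1 NOP1 NOP1 NOP1 NOP1 NOP1 NOP1 NOP1 \
              NOP1 NOP1 NOP1 NOP1 NOP1 NOP1 NOP1 NOP1
#define FILLER NOP16 NOP16 NOP16 NOP16 NOP16 NOP16 NOP16 NOP16 /* 128 nops */

void rob_probe(void **a, void **b, size_t iters)
{
    void *pa = *a, *pb = *b;
    for (size_t i = 0; i < iters; i++) {
        pa = *(void **)pa;                        /* cache-missing load #1 */
        __asm__ volatile (FILLER ::: "memory");   /* filler µOPs occupying the ROB */
        pb = *(void **)pb;                        /* cache-missing load #2 */
    }
    *a = pa;                                      /* keep the chains live */
    *b = pb;
}
```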

Moving onto the back-end, Intel has increased their execution width from 10 ports to 12 ports. Intel continues to use the merged execution port / reservation station design that characterises their cores, in contrast to other µarchs from the competition, or even their own E-core microarchitectures, which use separate integer and FP/vector ports and pipelines.

On the integer side of things, there’s now a fifth execution port and pipeline with simple ALU and LEA capabilities, which in theory makes this the widest x86 core in raw ALU throughput, although in mixed code those five pipelines are shared with other operations, so the effective width is comparatively narrower.
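
In practice the extra ALU port shows up as one more independent dependency chain that can be sustained per cycle. A trivial sketch of the kind of loop where that matters (the chain count is an illustrative assumption; build without auto-vectorisation or the compiler will turn this into SIMD code):

```c
#include <stdint.h>

/* Five independent add chains. On a four-ALU core these cannot all advance
 * every cycle; with a fifth simple-ALU port they can, loop overhead aside
 * (the increment and compare/branch also compete for ports, so a real test
 * would unroll the loop heavily). */
uint64_t alu_chains(uint64_t n)
{
    uint64_t a = 1, b = 2, c = 3, d = 4, e = 5;
    for (uint64_t i = 0; i < n; i++) {
        a += i;
        b += i;
        c += i;
        d += i;
        e += i;
    }
    return a ^ b ^ c ^ d ^ e;
}
```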

On the FP capabilities side of things, we see added dedicated FADD capabilities, which are far more efficient and lower latency than using the FMA units (when applicable). The FMA units have also seen added support for FP16 data types, although as this falls under the AVX512 umbrella, consumer cores won’t see the capability.
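
The practical distinction is that an operation which only adds (a reduction, for instance) no longer has to occupy the longer-latency FMA pipeline. A small AVX2 sketch of the two operation classes; the intrinsics are standard, but which units they map to on Golden Cove is our reading of the disclosure rather than something we have measured:

```c
#include <immintrin.h>

/* A plain vector add is the kind of operation the new dedicated FADD units
 * are meant to pick up, at lower latency than the FMA pipes. */
static inline __m256 vec_add(__m256 acc, __m256 x)
{
    return _mm256_add_ps(acc, x);
}

/* A fused multiply-add necessarily goes through the FMA units. */
static inline __m256 vec_fma(__m256 acc, __m256 x, __m256 w)
{
    return _mm256_fmadd_ps(x, w, acc);   /* computes acc + x * w */
}
```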

In general, besides the added 5th ALU port/pipeline, the back-end execution resources of Golden Cove appear to be somewhat more conservative compared to the more substantial front-end changes of the core.

On the load & store front, we see the addition of a further dedicated execution port with a load AGU, increasing the possible number of loads per cycle from 2 to 3, matching the upgrades we’ve seen from many other microarchitectures over the last few years.

For AVX512-enabled implementations of the µarch, the core is able to load up to 1Kb per cycle from the L1, while consumer cores still see a 50% improvement for AVX2 256-bit loads, at 3 loads per cycle.
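
A hedged sketch of what those load rates look like from software: stream wide loads from a buffer small enough to stay resident in the L1D and count bytes per cycle. Buffer handling and unrolling below are illustrative choices, and the buffer must be 32-byte aligned:

```c
#include <immintrin.h>
#include <stddef.h>

/* Re-read an L1-resident, 32-byte-aligned buffer with 256-bit loads. With
 * three load ports, the three independent loads per unrolled step can issue
 * in the same cycle, i.e. up to 96 bytes/cycle for AVX2 code (and up to
 * 128 bytes/cycle, the 1Kb figure, with two 512-bit loads on AVX-512 parts). */
__m256i l1_read_bandwidth(const __m256i *buf, size_t vecs, size_t reps)
{
    __m256i a = _mm256_setzero_si256(), b = a, c = a;
    for (size_t r = 0; r < reps; r++) {
        for (size_t i = 0; i + 2 < vecs; i += 3) {
            a = _mm256_xor_si256(a, _mm256_load_si256(&buf[i + 0]));
            b = _mm256_xor_si256(b, _mm256_load_si256(&buf[i + 1]));
            c = _mm256_xor_si256(c, _mm256_load_si256(&buf[i + 2]));
        }
    }
    return _mm256_xor_si256(a, _mm256_xor_si256(b, c));   /* defeat dead-code elimination */
}
```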

Intel, like many others, has noted that modern workloads exhibit increased memory-level parallelism, so the company has increased the number of outstanding memory transactions from the core by growing the L1D fill buffers from 12 to 16, and by increasing the load and store buffers by undisclosed amounts. Similarly to AMD, to better address sparse memory access workloads, the µarch has seen a doubling of page walkers from 2 to 4, able to walk pages on TLB misses.
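
The kind of code these fill-buffer and page-walker increases target is pointer-heavy traversal with several independent streams; a minimal sketch of the access pattern (the chain count is chosen purely for illustration):

```c
#include <stddef.h>

/* One dependent pointer chain exposes no memory-level parallelism: each load
 * waits for the previous one. Several independent chains let the core keep
 * multiple misses (and, on TLB misses, multiple page walks) in flight, so
 * the effective latency per element drops roughly with the chain count,
 * up to the limit of the fill buffers and page walkers. */
void chase4(void **p0, void **p1, void **p2, void **p3, size_t steps)
{
    /* each pointer starts a separate, randomly-shuffled chain in a buffer
     * much larger than the caches */
    void *a = *p0, *b = *p1, *c = *p2, *d = *p3;
    for (size_t i = 0; i < steps; i++) {
        a = *(void **)a;
        b = *(void **)b;
        c = *(void **)c;
        d = *(void **)d;
    }
    *p0 = a; *p1 = b; *p2 = c; *p3 = d;
}
```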

The L1 DTLB has grown from 64 entries to 96 entries – Intel didn’t mention the L2 TLB, which would mean it’s still at 2048 entries.
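
Those 96 entries cover 384KB of data with 4KB pages, and this is also an easy structure to probe from software: touch one cache line per page across a growing number of pages and watch for the point where accesses start paying for page walks. A minimal sketch (page count and stride are illustrative):

```c
#include <stddef.h>

/* Touch one line in each of `pages` distinct 4KB pages, repeatedly. While
 * `pages` stays below the L1 DTLB capacity (96 entries here, up from 64),
 * every access hits the TLB; beyond that, an increasing share of accesses
 * has to go through the L2 TLB and, past that, the page walkers. */
long touch_pages(const volatile char *buf, size_t pages, size_t reps)
{
    long sum = 0;
    for (size_t r = 0; r < reps; r++)
        for (size_t p = 0; p < pages; p++)
            sum += buf[p * 4096];   /* one access per page */
    return sum;
}
```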

Intel has improved the prefetchers, noting things such as “better stride prefetching in L1”, though beyond that the company hasn’t divulged many other details. We’ve seen a lot of advances in the industry in terms of prefetchers, so it’ll be interesting to see how Golden Cove compares to the competition here.

The L2 cache in Golden Cove remains at 1.25MB for consumer core variants, while the server variant grows to 2MB, compared to 1.25MB in the current Ice Lake-SP chips. Outstanding requests for parallel data accesses in the memory subsystem have grown from 32 misses to 48, further helping to increase the MLP capabilities of the core.

Intel also notes there have been prefetching improvements in the L2. What the company calls “full-line-write predictive bandwidth optimisation” is likely the same mechanism we saw in our review of Ice Lake-SP, where the core can greatly improve bandwidth by avoiding RFO reads of cache lines that are going to be fully rewritten, which should greatly improve common generic operations such as memory copies.
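
For reference, software has long been able to ask for the same behaviour explicitly via non-temporal stores, which write out full lines without first reading them; the point of the predictive mechanism is to get this benefit for ordinary stores without hints. A hedged sketch of the explicit version (standard SSE/AVX intrinsics; the destination must be 32-byte aligned):

```c
#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>

/* Fill a buffer with streaming (non-temporal) stores. A normal store to a
 * line that isn't in the cache first issues an RFO read of that line; a
 * streaming store writes the whole line without reading it, so a pure fill
 * or copy only consumes write bandwidth. */
void fill_stream(__m256i *dst, uint32_t value, size_t bytes)
{
    __m256i v = _mm256_set1_epi32((int)value);
    for (size_t i = 0; i < bytes / sizeof(__m256i); i++)
        _mm256_stream_si256(&dst[i], v);
    _mm_sfence();   /* make the weakly-ordered streaming stores visible */
}
```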

The aggregated changes of the new Golden Cove microarchitecture amount to a geomean IPC increase of 19% compared to Cypress Cove (Rocket Lake), measured over a set of workloads including SPEC CPU 2017, SYSmark 25, Crossmark, PCMark 10, WebXPRT3, and Geekbench 5.4.1. We can see in the graph that there are outlier workloads with up to +60% IPC, but also low outliers where the new design doesn’t improve things much or even sees regressions, which is odd.

The front-end changes of Golden Cove indeed look like a major step-function: 50% wider decode, +25% wider µOP output and double the capacity, +45% deeper out-of-order execution buffer, massive branch prediction improvements. It’s possible that the more conservative back-end execution improvements overall lowered the end results.

19% IPC is still a very impressive figure, and one of Intel’s largest jumps in a decade, but the company is also comparing it to Cypress Cove, rather than their latest-generation design, which is Willow Cove inside the Tiger Lake chips. We would have liked for Intel to at least give us both comparisons, so that we would get an idea of both desktop and mobile improvements.

Comments

  • name99 - Saturday, August 21, 2021 - link

    As always the devil is in the details :-)

    Basic loop buffers, as in the LSD (introduced one gen before Nehalem, with Core2) have been with us forever, including early ARM chips and the early PA Semi chips, going on to Apple Swift.

    But the basic loop buffer can not deal with branches (because part of the system is to switch off branch prediction!). Part of what makes the Apple scheme interesting and exceptional is that it's this graduated scheme that manages to extract much of the energy win from the repetition of loops while being able to cover a much wider variety of loops including those with (not too awful) patterns of function calls and branches.

    Comparing details is usually unhelpful because different architectures have different concerns; obviously x86 has decode+variable length concerns, which are probably THE prime concern for how they structure their attempts to extract performance and energy savings from loops.

    On the Apple side, I would guess that Mapping (specifically detecting dependencies within a Decode group of 8 instructions, ie what register written by instruction A is immediately read by successor instruction B) is a high-energy task, and a future direction for all these loop techniques on the Apple side might be to somehow save these inter-instruction-dependencies in the loop storage structure? This is, obviously, somewhat different from Intel or AMD's prime concern with their loops, given that even now they max out at only 5 (perhaps soon 6) wide in the mapping stage, and don't need to know as much for mapping because they don't do as much zero-cycle work in the stage right after Mapping.
  • GeoffreyA - Sunday, August 22, 2021 - link

    Thanks. I suppose storing the dependency information would be of use even in non-loop cases, because of the amount of work it takes. Then again, it might add greater complexity, which is always a drawback.
  • mode_13h - Sunday, August 22, 2021 - link

    > I would guess that Mapping (specifically detecting ... what register written by
    > instruction A is immediately read by successor instruction B) is a high-energy task

    So, do you foresee some future ISA trying to map these out at compile-time, like Intel's ill-fated EPIC tried to do? On the one hand, it bloats the instruction size with redundant information. On the other, it would save some expensive lookups, at runtime. I guess you could boil it down to a question of whether it takes more energy for those bits to come in from DRAM and traverse the cache hierarchy vs. doing more work inside the core.

    The other idea I have is that if the CPU stores some supplemental information in its i-cache, then why not actually flush that out to L3 & DRAM, rather than recompute it each time? The OS would obviously have to provide the CPU with a private data area, sort of like a shadow code segment, but at least the ISA wouldn't have to change.
  • mode_13h - Saturday, August 21, 2021 - link

    Thanks. Very nice incremental explanation of a loop buffer, trace cache, and L0.

    > n-way set-associative cache means you now have, n slots associated with a given index.
    > So if you have 8 slots, you can hold 8 lines with that same index,
    > ie 8 addresses with those same middle bits.
    > BUT how do you know WHICH of those 8 lines you want?

    Yeah, I know how a set-associative cache works. The simplistic explanation is that there's a n-entry CAM (Content-Addressable Memory) which holds the upper bits of the addresses (I think what you're calling Tags) for each cache line in a set. So, a cache lookup involves up to n (I suppose 8, in this case) comparisons against those upper bits, to find out if any of them match the requested address. And, ideally, we want the hardware to do all those comparisons in parallel, so it can give a quicker answer where our data is (or fetch it, if the cache doesn't have it).

    Even at that level, cache is something not enough software developers know about. It's really easy to thrash a normal set-associative cache. Just create a 2D array with a width that's a factor or an integral multiple of a cache set size and do a column-traversal. If you're lucky, your entire array fits in L3. If not... :(

    > which of these 8 possible lines is of interest is called a WAY.

    Where I first learned about CPU caches, they called it a set. So, an 8-way set-associative cache had 8 sets.

    > by storing the cache way, you can access a cache with the speed ...
    > and energy ... of a direct-mapped cache.

    Yup. That's what I thought you were saying. So, the way/set + offset is basically an absolute pointer into the cache. And a bonus is that it only needs as many bits as the cache size, rather than a full address. So, a 64k cache would need only 16 bits to uniquely address any content it holds.
  • GeoffreyA - Thursday, August 19, 2021 - link

    I believe the reservation station is that portion which contains the scheduler and physical register files. In Intel, it's been unified since P6, compared to AMD's split/distributed scheduler design and, I think, Netburst.
  • name99 - Thursday, August 19, 2021 - link

    "Intel is noting that they’re doing an increased amount of dependency resolution at the allocation stage, actually eliminating instructions that otherwise would have to actually emitted to the back-end execution resources."

    Again presumably this means "executing" instructions at Rename (or earlier) rather than as actual execution.
    Apple examples are
    - handling some aspects of branches (for unconditional branches) at Decode
    - zero cycle move. This means you treat a move from one register to another by creating a second reference to the underlying physical register. Sounds obvious, the trick is tracking how many references now exist to a given physical register and when it can be freed. It's tricky enough that Apple have gone through three very different schemes so far.
    - zero cycle immediates. The way Apple handle this is a separate pool of ~40 integer registers dedicated to handling MOV xn, # (ie load xn with immediate value #), and the instruction is again handled at Rename.
    Intel could do the same. They already do this for zero idioms, of course.

    - then there are weirder cases like value prediction, where again you insert the value into the target register at Rename. The instruction still has to be validated (hence executed in some form), but the early insertion lets dependent instructions proceed earlier. Apple does this for certain patterns of loads, but the details are too complicated for here.
  • mode_13h - Friday, August 20, 2021 - link

    Thanks again!
  • name99 - Thursday, August 19, 2021 - link

    "Someone please remind us what a reservation station is?"

    After an instruction is decoded it passes through Rename where resources it will need later (like a destination register, or a load/store queue entry) are allocated.
    Then it is placed in a Scheduling Queue. It sits in the queue until its dependencies are resolved (ie ADD x0, x1, x2 cannot execute until the values of x1 and x2 are known).
    This Scheduling Queue is also called a Reservation Station.

    There is a huge dichotomy in design here. Intel insists on using a single such queue, everyone else uses multiple queues. Apple have a queue per execution unit, ala https://twitter.com/dougallj/status/13739734787312... (this is not 100% correct, but good enough).
    The problem with a large queue is meeting cycle time. The problem with multiple queues is that they can get unbalanced. It's sad if you are executing integer only code and can't use the FP queue slots. Even worse is if you have one of your integer queues totally full, and the others empty, so only that one queue dispatches work per cycle.
    Apple solve these in a bunch of ways.
    First note the Dispatch Buffer in front of the Queues. This accepts instructions up to 8-wide per cycle from Rename, and sends as many as possible to the Scheduling Queues. It engages in load balancing to make sure that queues are always as close to evenly filled as possible.
    Secondly the most recent Apple designs pair the Scheduling Queues so that, ideally, each queue issues one instruction, but if a Queue cannot find a runnable instruction, it will accept the second choice runnable candidate from its paired queue.

    Queues and scheduling are actually immensely complicated. You have hundreds of instructions, all of which could depend in principle on any earlier instruction, so how do you track (at minimal area and energy) all these dependencies? Apple appears to use a Matrix Scheduler with a TRULY ASTONISHINGLY CLEVER dependency scheme. A lot about the M1 impresses me, but if I had to choose one thing it might be this.
    It's way too complicated to describe here, but among the things you need to bear in mind are
    - you want to track which instructions depend on which
    - you want to track the age of instructions (so that when multiple instructions are runnable, earliest go first)
    - you need to handle Replay. I won't describe this here, but for people who know what it is, Apple provide
    + cycle-accurate Replay (no randomly retrying every four cycles till you finally succeed!) AND, most amazingly
    + perfect DEMAND Replay (only the instructions, down a chain of dependencies, that depended on a Replay are in turn Replayed, and, again, at the cycle-accurate time)
    + if you think that's not amazing enough, think about what this implies for value prediction, and how much more aggressive you can be if the cost of a mispredict is merely a cycle-accurate on-demand Replay rather than a Flush!!!
  • mode_13h - Friday, August 20, 2021 - link

    Wow, you're on a roll!
  • name99 - Thursday, August 19, 2021 - link

    "> full-line-write predictive bandwidth optimisation ... where the core can greatly improve
    > bandwidth by avoiding RFO reads of cache lines that are going to be fully rewritten"

    Of course this is one of those "about time" optimizations :-)
    Apple (it's SO MUCH EASIER with a decent memory model, so I am sure also ARM) have had this for years of course. But improvements to it include
    - treat all-zero lines as special cases that are tagged in L2/SLC but don't require transferring data on the NoC. Intel had something like this in IceLake that, after some time, they switched off with a microcode update.

    - store aggregation is obvious and easy if your memory model allows it. But Apple also engages in load aggregation (up to a cache line width) for uncachable data. I'm not sure what the use cases of this are (what's still uncachable in a modern design? reads rather than DMA from PCIe?) but apparently uncachable loads and stores remain a live issue; Apple is still generating patents about them even now.

    - Apple caches all have a bit per line that indicates whether this line should be treated as streaming vs LRU. Obviously any design that provides non-temporal loads/stores needs something like that, but the Apple scheme also allows you to mark pages (or range registers, which are basically BATs -- yes PPC BATs are back, baby!) as LRU or streaming, then the system will just do the right thing whether that data is accessed by load/stores, prefetch or whatever else.

    BTW, just as an aside, Apple's prefetchers start at the load-store unit, not the L1 (meaning they see the VIRTUAL address stream). This in turn means they can cross page boundaries (and prefetch TLB entries for those boundary crossings). They're also co-ordinated so that each L1 is puppeting what it wants the L2 prefetcher to do for it, rather than having L1 and L2 prefetchers working independently and hoping that it kinda sorta results in what you want. And yes, of course, tracking prefetching efficiency and throttling when appropriate have always been there.
