Memory Subsystem & Latency: Quite Different

The memory subsystem comparisons for the Snapdragon 888 and Exynos 2100 are very interesting for a couple of reasons. First of all, these new SoCs are the first to use new higher-frequency LPDDR5-6400 memory, which is 16% faster than the LPDDR5-5500 DRAM used in last year's flagship devices.

On the Snapdragon 888 side of things, Qualcomm says it has made significant progress this generation in improving memory latency – generally a weak point of the previous few generations, although the company did keep improving things gen-on-gen.

On the Exynos 2100 side, Samsung's abandonment of its custom cores also means that the SoC is now very different from the Exynos 990. The M5 used to have a fast-path connection between the cores and the memory controllers – exactly how Samsung has reworked this in the Exynos 2100 will be interesting to see.

Starting things off with the new Snapdragon 888, we are seeing some very significant changes compared to last year's Snapdragon 865. Full random memory latency went down from 138ns to 114ns, which is a massive generational gain given that Arm's rule of thumb is that 4ns of latency equals 1% of performance.

Samsung's Exynos 2100 on the other hand doesn't look as good: at around 136ns at 128MB test depth, it's notably worse than the Snapdragon 888, and actually a regression compared to the Exynos 990's 131ns.

Looking closer at the cache hierarchies, we’re seeing 64KB of L1 caches for both X1 designs – as expected.

What's really odd though are the memory patterns of the X1 and A78 cores as they transition from the L2 caches to the L3. Usually you'd expect a larger latency hump into the tens of nanoseconds; however, on both the Cortex-X1 and Cortex-A78, on both the Snapdragon and the Exynos, we're seeing L3 latencies of 4-6ns – far faster than any previous-generation L3 and DSU design we've seen from Arm.

After experimenting a bit with my patterns, the answer to this odd behaviour is quite remarkable: Arm is prefetching all of these patterns, including the "full random" memory access pattern. My tests here consist of pointer-chasing loops across a given depth of memory, with the chain being closed and repeatedly traversed. Arm appears to have a new temporal prefetcher that recognizes arbitrary memory patterns, latches onto them, and prefetches them on subsequent iterations.
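To illustrate the closed pointer-chase methodology, here is a minimal sketch. This is not the actual test harness – `build_chain` and `chase_ns` are my own names – and a real latency test would additionally space elements a full cache line apart (typically 64 bytes) and pin the thread to a specific core. The sketch only shows the core idea: a single closed cycle of dependent loads that repeats identically on every pass, which is exactly what such a temporal prefetcher can learn.

```c
#include <stddef.h>
#include <stdlib.h>
#include <time.h>

/* Build a single-cycle random permutation (Sattolo's algorithm): a pointer
 * chase through 'next' visits every slot exactly once before wrapping back to
 * the start, giving a closed loop that repeats the same pattern every pass. */
static void build_chain(size_t *next, size_t n) {
    for (size_t i = 0; i < n; i++) next[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;   /* j < i guarantees one big cycle */
        size_t tmp = next[i];
        next[i] = next[j];
        next[j] = tmp;
    }
}

/* Chase the chain and return average nanoseconds per access. Each load's
 * address comes from the previous load, so the accesses are serialized and
 * the average time per iteration approximates the load-to-use latency.
 * clock() is coarse; a real harness would use a high-resolution timer. */
static double chase_ns(const size_t *next, size_t iters) {
    size_t idx = 0;
    clock_t t0 = clock();
    for (size_t i = 0; i < iters; i++) idx = next[idx];
    clock_t t1 = clock();
    volatile size_t sink = idx;          /* keep the loop from being deleted */
    (void)sink;
    return (double)(t1 - t0) / CLOCKS_PER_SEC * 1e9 / (double)iters;
}
```

Because the chain is a single cycle, every repetition of the loop replays the identical sequence of addresses – the property that lets a pattern-latching prefetcher defeat the "full random" test.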

I re-added an alternative full random access pattern test ("Full random RT") into the graph as additional data points. Instead of being pointer-chase based, this variant computes a random target address at runtime before accessing it, meaning the access pattern differs on every repeated loop over a given memory depth. The curves here aren't as tight as the pointer-chase variant's: the test currently doesn't guarantee that it visits every cache line at a given depth, nor that it avoids revisiting a cache line within a test loop, which is why some of the latencies come in lower than those of the "Full random" pattern – just ignore those parts.
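A sketch of what such a runtime-randomized variant can look like – again my own construction, not the article's actual code. The next index comes from a cheap PRNG each iteration, so repeated passes never replay a pattern the prefetcher could latch onto, and the loaded value is folded back into the address computation so the loads stay dependent, which a latency measurement requires. As noted above, this approach may revisit some lines and skip others.

```c
#include <stddef.h>
#include <stdint.h>

/* Cheap xorshift PRNG; the state must be seeded nonzero. */
static uint64_t xorshift64(uint64_t *s) {
    uint64_t x = *s;
    x ^= x << 13;
    x ^= x >> 7;
    x ^= x << 17;
    return *s = x;
}

/* "Full random RT"-style walk: the target index is computed at runtime on
 * every iteration, so no two passes over the same depth share a pattern.
 * XOR-ing the running 'sink' of loaded values into the index makes each
 * address depend on the previous load, serializing the chain. */
static uint64_t random_walk(const uint64_t *buf, size_t n, size_t iters) {
    uint64_t state = 0x9E3779B97F4A7C15ull;  /* fixed nonzero seed */
    uint64_t sink = 0;
    for (size_t i = 0; i < iters; i++) {
        size_t idx = (size_t)((xorshift64(&state) ^ sink) % n);
        sink += buf[idx];    /* next address depends on this load */
    }
    return sink;             /* returned so the loop isn't optimized away */
}
```

The trade-off is visible in the sketch: unlike the closed chain, nothing guarantees full or unique coverage of the cache lines within a pass, which is why those curves are noisier.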

This alternative pattern also more clearly reveals the 512KB versus 1MB L2 cache difference between the Exynos' X1 core and the Snapdragon's. Both chips have 4MB of L3, which is straightforward to identify.

What's odd about the Exynos are the linear access latencies. Unlike the Snapdragon, whose latency rises at 4MB and then stays relatively flat, the Exynos sees a second latency hump around the 10MB depth mark. It's harder to see in the other patterns, but it's actually present there as well.

This post-4MB cache hierarchy is actually easier to identify from the perspective of the Cortex-A55 cores. We see a very different pattern between the Exynos 2100 and the Snapdragon 888 here, which again confirms that latencies are lowered up until around the 10MB depth.

During the announcement of the Exynos 2100, Samsung mentioned it had included "better cache memory", which in the context of these results seems to point to the system level cache having grown from 2MB to 6MB. I'm not 100% sure whether it's 6 or 8MB, but 6 seems a safe bet for now.

In these A55 graphs, we also see that Samsung continues to use 64KB L2 caches, while Qualcomm employs 128KB implementations. Furthermore, it looks like the Exynos 2100 gives the A55 cores the full speed of the memory controllers, while the Snapdragon 888 puts a hard limit on them – hence the very poor memory latency – similar to what Apple does in its SoCs when only the small cores are active.

Qualcomm seems to have completely removed the CPU cluster's access to the SoC's system cache, as even the Cortex-A55 cores don't appear to reach it. This might explain why CPU memory latency has improved so much this generation – after all, memory traffic now makes one whole hop less. In theory this also puts less pressure on the SLC, allowing the GPU and other blocks to use its 3MB more effectively.

123 Comments

  • eastcoast_pete - Monday, February 8, 2021 - link

    Andrei, also special thanks for the power draw comparison of the A55 little cores in the (TSMC N7) 865 vs the (Samsung 5nm) 888! That one graph tells us everything we need to know about what Samsung's current "5nm" is really comparable to. I really wonder whether QC's decision to choose Samsung's fabbing was based more on availability (or the absence thereof for TSMC's 5nm) or on price?
  • DanD85 - Monday, February 8, 2021 - link

    Well, it seems like Apple hogging most of TSMC's 5nm node leaves others with no choice but to go with the lesser foundry.
  • heraldo25 - Monday, February 8, 2021 - link

    For such a thorough review it is surprising that the software versions (build numbers) used during testing are not stated.
    It is absolutely essential that the review contains software versions, so that others can try to replicate the results, and so the reviewing site has references during re-tests.
  • name99 - Monday, February 8, 2021 - link

    The milc win is certainly from the data prefetcher. In simulation milc also benefits massively from runahead execution, ie same principle (bring in data earlier).

    Has anyone identified a paper or patent that indicates what ARM are doing? A table driven approach (markov prefetcher) still seems impractical, and ARM don't go in for blunt solutions that just throw area at the problem. They might be doing something like scanning lines as they enter L2 for what look like plausible addresses, and prefetching based on those, which would cover a large range of pointer-based use cases, and seems like the sort of smart low area solution they tend to favor.
  • trivik12 - Monday, February 8, 2021 - link

    Hope Qualcomm moves its next-gen flagship SoC to TSMC again. It cannot be at such a disadvantage. Of course Samsung's 3nm could narrow the gap, but that is more for 2023 flagships.
    Disappointing to see Exynos disappoint again. How is the Exynos 1080 as a mid-range chipset?
  • geoxile - Monday, February 8, 2021 - link

    Their 3nm is expected to be on par with TSMC N5. The expected gains over 7nm are only 30% higher performance, 35% die area reduction, and 40-50% power reduction. Considering 5LPE is still behind N7P, that's not much, and it will barely be on par with N5 in density, let alone efficiency.
  • jeremyshaw - Monday, February 8, 2021 - link

    In other words, Samsung strangled then killed SARC for their failures, only to find the failures were with SSI itself.
  • geoxile - Monday, February 8, 2021 - link

    You must be kidding... The Exynos 2100 is at least somewhat close to the Snapdragon 888 in CPU performance. Mali continues to be a problem, and remains so even for the Kirin 9000 on TSMC N5. Mongoose was an abomination that belonged maybe in 2015. Samsung Semiconductor is less competent than TSMC but SARC's mongoose team was a joke.
  • EthiaW - Monday, February 8, 2021 - link

    All those attempts to spend transistors sparingly and boost performance through high frequency have failed miserably.
    Per-transistor performance seems to be decaying from node to node now. A flat or insufficiently increased transistor count equals a performance regression.
  • eastcoast_pete - Monday, February 8, 2021 - link

    Andrei, when you're testing the actual phone, could you check the battery life with the 5G modem on and off, respectively? 5G modems are supposedly quite power hungry also, and, if it's possible to turn 5G off (but leaving 4G LTE on), it would be interesting to see just how much power 5G really consumes.
