An Update on Apple’s A7: It's Better Than I Thought

When I reviewed the iPhone 5s I didn’t have much time to do the sort of in-depth investigation into Cyclone (Apple’s 64-bit custom ARMv8 core) that I did with Swift (Apple’s custom ARMv7 core from the A6) the year before. I had heard rumors that Cyclone was substantially wider than its predecessor, but without any proof beyond hearsay I left it out of the article. Instead I surmised in the 5s review that the A7 was likely an evolved Swift core rather than a brand new design - after all, what sense would it make to design a new CPU core and then do it all over again for the next one? It turns out I was quite wrong.

Armed with a bit of custom code and a bunch of low level tests I think I have a far better idea of what Apple’s A7 and Cyclone cores look like now than I did a month ago. I’m still toying with the idea of doing a much deeper investigation into A7, but I wanted to share some of my findings here.

The first task is to understand the width of the machine. With Swift I got lucky in that Apple had left a bunch of public LLVM documentation uncensored, referring to Swift’s 3-wide design. It turns out that although the design might be capable of decoding, issuing and retiring up to three instructions per clock, in most cases it behaved like a 2-wide machine. Mix FP and integer code and you’re looking at a machine that’s more like 1.5 instructions wide. Obviously Swift did very well in the market and its competitors at the time, including Qualcomm’s Krait 300, were similarly capable.

With Cyclone Apple is in a completely different league. As far as I can tell, Cyclone’s peak issue width is 6 instructions per clock. That’s at least 2x the width of Swift and Krait, and at best more than 3x the width depending on instruction mix. Limitations on co-issuing FP and integer math have also been lifted, as you can now run up to four integer adds and two FP adds in parallel. You can also perform up to two loads or stores per clock.
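
For the curious, this is roughly what those low level tests look like. The sketch below times a long stream of independent integer adds against an equally long serially dependent chain: the independent version can spread across however many integer ALUs the core exposes, while the dependent one is stuck at one add per clock, so the ratio of the two run times approximates integer issue width. The loop structure, iteration count and timing code here are illustrative choices rather than my exact test, and in practice you have to check the generated assembly (or write the loops in assembly directly) to make sure the compiler hasn’t folded or vectorized them.

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define ITERS 100000000ULL   /* 100M iterations, 6 adds each */

static double now(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

/* Six independent add chains: a wide core can issue several per clock. */
static uint64_t independent_adds(void) {
    uint64_t a = 1, b = 2, c = 3, d = 4, e = 5, f = 6;
    for (uint64_t i = 0; i < ITERS; i++) {
        a += i; b += i; c += i; d += i; e += i; f += i;
    }
    return a + b + c + d + e + f;
}

/* One serially dependent chain: at best one add per clock, no matter
   how wide the machine is. */
static uint64_t dependent_adds(void) {
    uint64_t a = 1;
    for (uint64_t i = 0; i < ITERS; i++) {
        a += i; a += i; a += i; a += i; a += i; a += i;
    }
    return a;
}

int main(void) {
    double t0 = now();
    volatile uint64_t r1 = independent_adds();
    double t1 = now();
    volatile uint64_t r2 = dependent_adds();
    double t2 = now();
    (void)r1; (void)r2;

    /* Both kernels retire the same number of adds, so the time ratio is
       a rough lower bound on how many integer adds issue per clock. */
    printf("independent: %.3fs  dependent: %.3fs  ratio: %.2fx\n",
           t1 - t0, t2 - t1, (t2 - t1) / (t1 - t0));
    return 0;
}
```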

I don’t yet have a good understanding of the number of execution ports and how they’re mapped, but Cyclone appears to be the widest ARM architecture we’ve seen to date. I’m talking wider than Qualcomm’s Krait 400 and even ARM’s Cortex A15.

I did include some low level analysis in the 5s review, where I pointed out the A7’s significantly reduced memory latency and increased memory bandwidth. It turns out that I was missing a big part of the story back then as well…

A Large System Wide Cache

In our iPhone 5s review I pointed out that the A7 now featured more computational GPU power than the 4th generation iPad. For a device running at 1/8 the resolution of the iPad, the A7’s GPU meant either that Apple had an application that needed tons of GPU performance, or that it planned on using the A7 in other, higher resolution devices. I speculated it would be the latter, and it turns out that’s indeed the case. For the first time since the iPad 2, Apple once again shares common silicon between the iPhone 5s, iPad Air and iPad mini with Retina Display.

As Brian found out in his investigation after the iPad event last week, all three devices use the exact same silicon with the exact same internal model number: S5L8960X. There are no extra cores, no change in GPU configuration and, biggest of all, no increase in memory bandwidth.

Previously both the A5X and A6X featured a 128-bit wide memory interface, with half of it seemingly reserved for exclusive GPU use. The non-X parts by comparison only had a 64-bit wide memory interface. The assumption was that a move to such a high resolution display demanded a substantial increase in memory bandwidth. With the A7, Apple takes a step back in memory interface width - so is the narrower interface enough to hamper the performance of the iPad Air and its 2048 x 1536 display?

The numbers alone tell us the answer is no. In all available graphics benchmarks the iPad Air delivers better performance at its native resolution than the outgoing 4th generation iPad (as you’ll soon see). Now many of these benchmarks are bound more by GPU compute than by memory bandwidth, a side effect of the relative scarcity of memory bandwidth on modern mobile platforms. Across the board though, I couldn’t find a situation where anything was smoother on the iPad 4 than on the iPad Air.
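
As a rough sanity check on what the interface change means in theory, peak DRAM bandwidth is simply bus width multiplied by transfer rate. The 1600MT/s figure below is an illustrative assumption (LPDDR3 class memory), not a confirmed spec for any of these parts, but it shows how interface width scales the theoretical ceiling:

```c
#include <stdio.h>

/* Peak theoretical DRAM bandwidth = bus width (in bytes) x transfer rate.
   1600MT/s is an assumed, illustrative figure - not a confirmed spec. */
int main(void) {
    const double transfers_per_sec = 1600e6;
    const double bus64_bytes  = 64 / 8.0;   /* A6/A7 style 64-bit interface    */
    const double bus128_bytes = 128 / 8.0;  /* A5X/A6X style 128-bit interface */

    printf("64-bit  @ 1600MT/s: %4.1f GB/s peak\n", bus64_bytes  * transfers_per_sec / 1e9);
    printf("128-bit @ 1600MT/s: %4.1f GB/s peak\n", bus128_bytes * transfers_per_sec / 1e9);
    return 0;
}
```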

There’s another part of this story - something I missed in my original A7 analysis. When Chipworks posted a shot of the A7 die, many of you correctly identified what appeared to be a 4MB SRAM on the die itself. It’s highlighted on the right in the floorplan diagram below:


A7 Floorplan, Courtesy Chipworks

While I originally assumed that this SRAM might be reserved for use by the ISP, it turns out that it can do a lot more than that. If we look at memory latency (from the perspective of a single CPU core) vs. transfer size on A7 we notice a very interesting phenomenon between 1MB and 4MB:
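
The curve comes from a classic pointer chasing test: carve a buffer into cache line sized nodes, link the nodes together in a random order, and time how long each dependent load takes as the buffer grows. Once the working set outgrows a cache level, the time per hop jumps. The sketch below shows the technique; the 64-byte line size, buffer sizes and hop count are illustrative assumptions rather than the exact parameters of my test.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* One node per cache line (64 bytes assumed). */
typedef struct node {
    struct node *next;
    char pad[64 - sizeof(struct node *)];
} node_t;

/* Average load-to-load latency for a randomly permuted pointer chain
   spanning `bytes` of memory. The per-hop cost jumps each time the
   working set spills out of a cache level (L1 -> L2 -> the ~4MB SRAM
   -> DRAM). */
static double chase_ns(size_t bytes, size_t hops) {
    size_t n = bytes / sizeof(node_t);
    node_t *nodes = malloc(n * sizeof(node_t));
    size_t *order = malloc(n * sizeof(size_t));
    if (!nodes || !order) { free(nodes); free(order); return 0.0; }

    for (size_t i = 0; i < n; i++) order[i] = i;
    for (size_t i = n - 1; i > 0; i--) {            /* Fisher-Yates shuffle */
        size_t j = (size_t)rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }
    for (size_t i = 0; i < n; i++)                  /* link the permutation */
        nodes[order[i]].next = &nodes[order[(i + 1) % n]];

    volatile node_t *p = &nodes[order[0]];
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < hops; i++)
        p = p->next;                                /* serially dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    free(nodes);
    free(order);
    return ns / (double)hops;
}

int main(void) {
    for (size_t kb = 32; kb <= 16 * 1024; kb *= 2)
        printf("%6zu KB working set: %6.1f ns per load\n",
               kb, chase_ns(kb * 1024, (size_t)1 << 24));
    return 0;
}
```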

That SRAM is indeed some sort of cache sitting in front of main memory. It’s not the fastest thing in the world, but it’s appreciably quicker than going all the way out to main memory. Available bandwidth is also pretty good:

We’re only looking at the bandwidth seen by a single CPU core, but even then we’re talking about 10GB/s. Lookups in this third level cache don’t happen in parallel with main memory requests, so unfortunately the impact on worst case memory latency is additive (a tradeoff of speed vs. power).
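
The bandwidth side of the test is even simpler: sweep a buffer with sequential reads and divide bytes touched by elapsed time, sizing the buffer just past the 1MB L2 to measure the SRAM and well beyond 4MB to measure DRAM. Again, this is a simplified illustration - a scalar read loop like the one below won’t necessarily saturate the interface, and a real harness would use vector loads and multiple access streams - but the structure is the same.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Sequentially read `bytes` of memory `passes` times and report GB/s.
   Accumulating into a sum (and storing it to a volatile) keeps the
   compiler from throwing the loads away. */
static double read_gbps(size_t bytes, int passes) {
    size_t n = bytes / sizeof(uint64_t);
    uint64_t *buf = malloc(bytes);
    if (!buf) return 0.0;
    for (size_t i = 0; i < n; i++)
        buf[i] = (uint64_t)rand();       /* touch pages, defeat constant folding */

    uint64_t sum = 0;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int p = 0; p < passes; p++)
        for (size_t i = 0; i < n; i++)
            sum += buf[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);

    volatile uint64_t sink = sum;
    (void)sink;

    double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    free(buf);
    return (double)bytes * passes / s / 1e9;
}

int main(void) {
    /* ~2MB should mostly live in the on-die SRAM (past the 1MB L2);
       64MB falls through to main memory. */
    printf("2MB buffer:  %.2f GB/s\n", read_gbps((size_t)2 << 20, 512));
    printf("64MB buffer: %.2f GB/s\n", read_gbps((size_t)64 << 20, 16));
    return 0;
}
```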

I don’t yet have the tools needed to measure the impact of this on-die memory on GPU accesses, but even in the worst case, where only the CPU can use it, it’ll help free up more of the memory interface for the GPU. It’s more likely that some graphics requests are cached here as well, with bandwidth allocated intelligently depending on what type of application you’re running.

That’s the other aspect of what makes A7 so very interesting. This is the first Apple SoC that’s able to deliver good amounts of memory bandwidth to all of its bandwidth consumers. A single CPU core can use up to 8GB/s of bandwidth. I’m still vetting other SoCs, but so far I haven’t come across anyone in the ARM camp that can compete with what Apple has built here. Only Intel is competitive.

 

Comments

  • lilo777 - Wednesday, October 30, 2013

    Or maybe they realized that product longevity is more important to them especially when other tech pioneers can provide both.
  • tipoo - Wednesday, October 30, 2013

    The 5S throttles to 75% after just two minutes of load? I'm not sure why that would be considered an ok thing. The Nexus 4 was criticized a lot for throttling.
  • Zoolookuk - Wednesday, October 30, 2013

    Battery life... the initial 2 mins is just like turbo boost. It's not permanent.
  • Justin216 - Wednesday, October 30, 2013

    The Nexus 4 was throttling under normal or typical use -- the throttling shown in this review is under extreme situations that wouldn't normally or reasonably occur.
  • Spunjji - Wednesday, October 30, 2013

    Incorrect, it throttles under benches too. Never seen my partner's Nexus 4 show throttling behaviour under normal use. That said, we do live in England...
  • stacey94 - Wednesday, October 30, 2013

    Well you wouldn't notice it unless he was playing a game. With mpdecision the Nexus 4 runs at 1026 MHz or slightly higher after touch input anyway, and that's also the loading throttling frequency.
  • darkich - Wednesday, October 30, 2013

    Soo..we just saw the A7 stomping all over the most powerful Bay Trail.
    Now will you FINALLY drop your Intel bias, Anand?
  • VengenceIsMineX - Wednesday, October 30, 2013

    The ASUS T100 is a lower Bay Trail SKU, not the higher speed one, and ASUS really cut things like memory quality and storage to the bone to hit that price point. Anand hasn't done a review of a top tier Bay Trail product like the upcoming Dell Venue Pro 11 yet.
  • VengenceIsMineX - Wednesday, October 30, 2013

    Also, the benchmarks Anand chose for this are almost all JavaScript based, not compiled code, which is where Intel has traditionally performed better vis-à-vis ARM. It's still too early to call this fight until we see the higher end Bay Trail SKUs in action and better benchmarks; the ones Anand chose would be expected to tilt toward the A7, but compiled code benchmarks like Geekbench will likely slant toward Intel.
  • Braumin - Wednesday, October 30, 2013

    To be fair to Anand, it's just tough to compare anything cross-platform. Unfortunately browser tests are one of the few cross-platform things you can even run, even if it's ultimately a test of the browser more than anything.

    Even if you had native code it would be different on every platform and therefore not an equal test.

    I look at the JS benchmarks and find them interesting more in how far everyone has come rather than actually compare them across platforms. The only thing Kraken tells you is how that particular OS/chip runs Kraken. For sure it's fair to compare, say Surface RT and Surface 2 on Kraken, but that's about where it ends.
