When Apple announced the iPhone 5, Phil Schiller officially confirmed what had leaked several days earlier: the phone is powered by Apple's new A6 SoC.

As always, Apple didn't announce clock speeds, CPU microarchitecture, memory bandwidth or GPU details. It did, however, give us an indication of expected CPU performance: roughly 2x that of the outgoing A5.
 
 
Prior to the announcement we speculated the iPhone 5's SoC would simply be a higher clocked version of the 32nm A5r2 used in the iPad 2,4. After all, Apple seems to like saving major architecture shifts for the iPad. 
 
However, just prior to the announcement I received some information pointing to a move away from the ARM Cortex A9 used in the A5. Given Apple's reliance on fully licensed ARM cores in the past, the expected performance gains, and the unpublishable information that started all of this, I concluded Apple's A6 SoC likely featured two ARM Cortex A15 cores.
 
It turns out I was wrong. But pleasantly surprised.
 
The A6 is the first Apple SoC to use an Apple-designed, ARMv7-based processor. The CPU core(s) aren't based on a vanilla A9 or A15 design licensed from ARM, but instead are something of Apple's own creation.
 

Hints in Xcode 4.5

 
The iPhone 5 will ship with, and only run, iOS 6.0. To coincide with the launch of iOS 6.0, Apple has seeded developers with a newer version of its development tools. Xcode 4.5 makes two major changes: it drops support for the ARMv6 ISA (used by the ARM11 core in the iPhone 2G and iPhone 3G) while retaining support for ARMv7 (used by modern ARM cores), and it adds a new architecture target designed to support the new A6 SoC: armv7s.
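
For the curious, here's a minimal sketch of how code can tell which slice it's being compiled for. The macro names are an assumption about what Apple's clang defines for each architecture target, so treat this as illustrative rather than gospel:

    #include <stdio.h>

    int main(void) {
        /* Assumption: Apple's clang defines __ARM_ARCH_7S__ when building
           the armv7s slice and __ARM_ARCH_7A__ when building armv7. */
    #if defined(__ARM_ARCH_7S__)
        printf("built for the armv7s slice\n");
    #elif defined(__ARM_ARCH_7A__)
        printf("built for the armv7 slice\n");
    #else
        printf("built for some other architecture\n");
    #endif
        return 0;
    }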
 

 
What's the main difference between the armv7 and armv7s architecture targets for the LLVM C compiler? The presence of VFPv4 support. The armv7s target supports it; the armv7 target doesn't. Why does this matter?
 
Only the Cortex A5, A7 and A15 support the VFPv4 extensions to the ARMv7-A ISA. The Cortex A8 and A9 top out at VFPv3. If you want to get really specific, the Cortex A5 and A7 implement a 16-register VFPv4 FPU, while the A15 features a 32-register implementation. The point is, if your architecture supports VFPv4 then it isn't a Cortex A8 or A9.
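
The most visible thing VFPv4 adds over VFPv3 is a fused multiply-accumulate (VFMA/VFMS). A minimal C sketch, assuming the compiler is willing to lower the C99 fma routines to the hardware instruction on a VFPv4 target:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        float a = 1.5f, b = 2.0f, c = 0.25f;
        /* a*b + c with a single rounding step; on a VFPv4 core this can
           compile down to one VFMA.F32, while a VFPv3 core falls back to
           a library call or a separate multiply and add. */
        float r = fmaf(a, b, c);
        printf("fmaf(%g, %g, %g) = %g\n", a, b, c, r);
        return 0;
    }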
 
It's pretty easy to dismiss the Cortex A5 and A7, as neither of those cores is significantly faster than the Cortex A9 used in Apple's A5. The obvious conclusion, then, is that Apple implemented a pair of A15s in its A6 SoC.
 
For unpublishable reasons, I knew the A6 SoC wasn't based on ARM's Cortex A9, but I immediately assumed that the only other option was the Cortex A15. I foolishly cast aside the other major possibility: an Apple-developed ARMv7 processor core.
 

Balancing Battery Life and Performance

 
There are two types of ARM licensees: those who license a specific processor core (e.g. Cortex A8, A9, A15), and those who license an ARM instruction set architecture for custom implementation (e.g. the ARMv7 ISA). For a long time it's been known that Apple has both types of licenses. Qualcomm is in a similar situation; it licenses individual ARM cores for use in some SoCs (e.g. the MSM8x25/Snapdragon S4 Play uses ARM Cortex A5s) as well as the ARM instruction set for use by its own processors (e.g. Scorpion/Krait implement the ARMv7 ISA).
 
For a while now I'd heard that Apple was working on its own ARM-based CPU core, but the last I heard, Apple was having issues making it work. I assumed it was too early for Apple's own design to be ready. It turns out it isn't. Based on a lot of digging over the past couple of days, and conversations with the right people, I've confirmed that Apple's A6 SoC is based on Apple's own ARM-based CPU core and not the Cortex A15.
 
Implementing VFPv4 tells us that this isn't simply another Cortex A9 design targeted at higher clocks. If I had to guess, I would assume Apple did something similar to Qualcomm this generation: go wider without going substantially deeper. Remember Qualcomm moved from a dual-issue mostly in-order architecture to a three-wide out-of-order machine with Krait. ARM went from two-wide OoO to three-wide OoO but in the process also heavily pursued clock speed by dramatically increasing the depth of the machine.
 
The deeper machine plus much wider front end and execution engines drives both power and performance up. Rumor has it that the original design goal for ARM's Cortex A15 was servers, and it's only through big.LITTLE (or other clever techniques) that the A15 would be suitable for smartphones. Given Apple's intense focus on power consumption, skipping the A15 would make sense but performance still had to improve.

Why not just run the Cortex A9 cores from Apple's A5 at higher frequencies? It's tempting (after all, that's what many others in the space have done), but sub-optimal from a design perspective. As we learned during the Pentium 4 days, simply relying on frequency scaling to deliver generational performance improvements results in reduced power efficiency over the long run.
 
To push frequency you have to push voltage, and dynamic power scales with the square of voltage (and linearly with frequency), so the cost compounds quickly. Running your cores as close as possible to their minimum voltage is ideal for battery life. The right approach to scaling CPU performance is a combination of increased architectural efficiency (more instructions executed per clock), multithreading and conservative frequency scaling. Remember that in 2005 Intel hit 3.73GHz with the Pentium Extreme Edition. Seven years later Intel's fastest client CPU only runs at 3.5GHz (3.9GHz with turbo) but has four times the cores and up to 3x the single-threaded performance. Architecture, not just frequency, must improve over time.
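
To put rough numbers on that, here's a back-of-the-envelope sketch using the textbook dynamic power relation P ≈ C·V²·f. The voltage and frequency figures are purely illustrative assumptions, not A5 or A6 data:

    #include <stdio.h>

    int main(void) {
        /* Illustrative numbers only: a hypothetical 50% frequency bump
           that also requires a 20% voltage bump. */
        double v0 = 1.0, f0 = 800e6;   /* baseline: 1.0 V @ 800 MHz */
        double v1 = 1.2, f1 = 1.2e9;   /* pushed:   1.2 V @ 1.2 GHz */

        double rel_perf  = f1 / f0;                          /* ~1.5x  */
        double rel_power = (v1 * v1 * f1) / (v0 * v0 * f0);  /* ~2.16x */

        printf("performance: %.2fx  dynamic power: %.2fx  perf per watt: %.2fx\n",
               rel_perf, rel_power, rel_perf / rel_power);
        return 0;
    }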
 
At its keynote, Apple promised longer battery life and 2x better CPU performance. It's clear that the A6 moved to 32nm, but it's impossible to extract 2x better performance from the same CPU architecture while also improving battery life from a single process node shrink alone.
 
Despite all of this, had it not been for some external confirmation, I would've probably settled on a pair of higher clocked A9s as the likely option for the A6. In fact, higher clocked A9s were what we originally claimed would be in the iPhone 5 in our NFC post.
 
I should probably give Apple's CPU team more credit in the future.
 
The bad news is I have no details on the design of Apple's custom core. Given Apple's willingness to spend on die area, I believe an A15/Krait class CPU core is a likely target: a slightly wider front end, more execution resources, a more flexible OoO execution engine, deeper buffers, bigger windows, etc. Support for VFPv4 alone guarantees a bigger core than the Cortex A9, so it only makes sense that Apple would push the envelope everywhere else as well. I'm particularly interested in frequency targets and whether there's any clever dynamic clock work happening. Someone needs to run Geekbench on an iPhone 5 pronto.
 
I also have no indication of how many cores there are. I'm assuming two, but Apple was careful not to disclose a core count the way it has in the past. We'll get more details as we get our hands on devices in a week. I'm really interested to see what happens once Chipworks and UBM go to town on the A6.
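
Core count, at least, is easy to check from software once devices arrive. A quick sketch using the standard Darwin sysctl interface (nothing A6-specific assumed here):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/sysctl.h>

    int main(void) {
        int ncpu = 0;
        size_t len = sizeof(ncpu);

        /* hw.ncpu reports the number of logical CPUs the kernel sees. */
        if (sysctlbyname("hw.ncpu", &ncpu, &len, NULL, 0) == 0)
            printf("logical CPUs: %d\n", ncpu);
        else
            perror("sysctlbyname");

        return 0;
    }
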
Comments

  • zanon - Saturday, September 15, 2012 - link

    Thanks for the write up Anand, this is an interesting step forward and I very much look forward to seeing what they've put together. We've known for years that Apple has been acquiring significant chip design talent (P. A. Semi being a major example), but I think this will be the first time we'll get to see it really put to use at the lowest and most core levels, rather than merely SoC integration or peripheral stuff.

    It's been many years since we last saw a wide array of companies trying to make different CPUs. It'll be very interesting to see what all comes of it.
  • chromatix - Saturday, September 15, 2012 - link

    I believe they leave gaps in the numbering scheme to allow for new types of cores in lower performance and power consumption brackets.

    The Cortex-A8 was the first ARMv7-A design. It was followed considerably later by the A5 and A9, offering lower and higher performance (and power consumption) respectively. The A7 and A15 are the latest pair in the same vein.

    There are also Cortex-R and Cortex-M series CPUs, following the ARMv7-R and ARMv7-M architectures respectively (except that the Cortex-M0, the very smallest ARM core, follows ARMv6-M instead). These are for Realtime and eMbedded applications respectively, with the appropriate design tradeoffs.

    None of these are to be confused with the ARM7 core, which dates back to the mid-1990s (using the ARMv4T architecture) and is still insanely popular because it uses only a few tens of thousands of transistors. It has been regularly updated to work with newer processes, so these days it is a complete CPU core in a tiny fraction of a square millimetre, and runs at several hundred MHz. All together now: "Imagine a Beowulf cluster of those!"
  • KPOM - Saturday, September 15, 2012 - link

    After you are done wiping the egg off your face from your proud tweets earlier this week that it was an A15, I'm assuming you'll be running some tests once you get your hands on an iPhone 5. Is that a good assumption?

    It will be interesting to see how well this compares to the A15, and what competitors will put into their phones over the coming months.
  • DigitalFreak - Saturday, September 15, 2012 - link

    Dick
  • Sufo - Sunday, September 16, 2012 - link

    Oh my, what buffoons. Factually inaccurate tweets? However will they live it down? *snicker*
  • ltcommanderdata - Saturday, September 15, 2012 - link

    The SGX554MP2 is also a possibility. It offers 2x the ALU performance of the SGX543MP2, equivalent to the SGX543MP4, without doubling the TMU or ROP count, which isn't as necessary given the resolution difference between the 2012 iPad and iPhone 5. Apple may not want to introduce a new GPU core when Rogue is around the corner though.

    Is it too early for 2x32-bit LPDDR3? Sticking with LPDDR2 they could only move from LPDDR2-800 in the A5 to LPDDR2-1066, which is a pretty marginal difference in bandwidth to feed a 2x faster CPU and particularly a 2x faster GPU. If they could get LPDDR3-1600, they could match A5X memory bandwidth with half the memory controllers.

    Any speculation on cache sizes? Shipping Cortex A9 designs seem to have stuck with 512KB per core, which was unchanged from higher-end Cortex A8 designs. Since Apple isn't worried about die space, would moving to 1MB L2 cache per core be worthwhile? Intel pushed a large, low latency L2 cache in Dothan as a power efficient way to increase performance so there is merit to that approach. Would Apple consider a shared L3 cache like Sandy Bridge/Ivy Bridge to share data between the CPU and GPU?
  • Alexvrb - Sunday, September 16, 2012 - link

    I have to admit, I was sure it was an MP4 like the latest iPad... but I hadn't considered the possibility of an MP3 (at slightly higher clocks than their A5 used) or an SGX554. I hadn't seen or heard of 554 being used, so I sort of forgot the design was there. Waiting.

    Won't be long until Series 6 though, which will produce some pretty amazing mobile graphics. :D
  • erple2 - Sunday, September 16, 2012 - link

    Doesn't the Archos 101sx use the omap 4470, and the sgx 554? I seem to remember reading that in the review on this very site (page 3, middle of second paragraph).
  • Alexvrb - Sunday, September 16, 2012 - link

    I'm pretty sure that the 4470 has an SGX544. Looking around, it looks like there were rumors of it being an SGX554 before release, but those same rumors said the CPU cores would be clocked at 1.8GHz. So... yeah.
  • dagamer34 - Saturday, September 15, 2012 - link

    This makes a lot more sense than Apple having A15 CPUs (I always found it a little weird that Apple would be ready with them before TI when they were its lead development partner, and we haven't seen silicon at final clocks yet).

    Invariably, if we've got a custom CPU core, then sooner or later we're going to get a custom GPU core and the Apple SoC is going to be a black box where the only data we get about it is from benchmarks.

    I'm glad the mistake has been cleared up and eager to see how it performs against Krait and Cortex A15.
