Though primarily a software-focused event, Apple’s WWDC keynotes are often the stage for an interesting hardware announcement or two as well, and this year Apple did not disappoint. At the company’s biggest Mac-related keynote of the year, Apple unveiled the M2, their second-generation Apple Silicon SoC for the Mac (and iPad) platform. Touting modest performance gains over the original M1 SoC of around 18% for multithreaded CPU workloads and 35% in peak GPU workloads, the M2 is Apple’s first chance to iterate on their Mac SoC to incorporate updated technologies, as well as to refresh their lower-tier laptops in the face of recent updates from their competitors.

With the king of the M1 SoCs, M1 Ultra, not even 3 months behind them, Apple hasn’t wasted any time in preparing their second generation of Apple Silicon SoCs. To that end, the company has prepared what is the first (and undoubtedly not the last) of a new family of SoCs with the Apple Silicon M2. Designed to replace the M1 within Apple’s product lineup, the M2 SoC is being initially rolled out in refreshes of the 13-inch MacBook Pro, as well as the MacBook Air – which is getting a pretty hefty redesign of its own in the process.

The launch of the M2 also gives us our first real glimpse into how Apple is going to handle updates within the Apple Silicon ecosystem. With the iPhone family, Apple has kept to a yearly cadence for A-series SoC updates; conversely, the traditional PC ecosystem is on something closer to a 2-year cadence as of late. M2 seems to split this down the middle, coming about a year and a half after the original M1 – though in terms of architecture it looks closer to a yearly A-series SoC update.

From a high level, there have been a limited number of changes with the M2 – or at least as much as Apple wants to disclose at this time – with the focus being on a few critical areas, versus the bonanza that was the initial M1 SoC. While all of this is preliminary ahead of either further disclosures from Apple or getting hands-on time with the hardware itself, the M2 looks a lot like a derivative of the A15 SoC, similar to how the M1 was derived from the A14. As a result, at first glance the M1 to M2 upgrade looks quite similar to the A14 to A15 upgrade.

According to Apple, the new SoC comprises roughly 20 billion transistors, which is 4B (25%) more than the original M1 – and 5B more than the A15 SoC. The chip is being made on what Apple terms a “second generation 5nm” process, which we believe is likely TSMC’s N5P line, the same line used for the A15 SoC. N5P offers improved performance characteristics versus N5, but no density improvements. So while Apple doesn’t disclose die sizes, the company’s side-by-side die shots are at least accurate in that M2 is going to be a bigger chip than M1.

Apple Silicon SoCs

| SoC | M2 | M1 |
|---|---|---|
| CPU | 4x High Performance (Avalanche?) w/ 16MB Shared L2 + 4x High Efficiency (Blizzard?) w/ 4MB Shared L2 | 4x High Performance (Firestorm) w/ 12MB Shared L2 + 4x High Efficiency (Icestorm) w/ 4MB Shared L2 |
| GPU | "Next Generation", 10-Core, 3.6 TFLOPS | 8-Core, 2.6 TFLOPS |
| Neural Engine | 16-Core, 15.8 TOPS | 16-Core, 11 TOPS |
| Memory Controller | LPDDR5-6400, 8x 16-bit CH, 100GB/sec Total Bandwidth (Unified) | LPDDR4X-4266, 8x 16-bit CH, 68GB/sec Total Bandwidth (Unified) |
| Memory Capacity | 24GB | 16GB |
| Encode/Decode | 8K: H.264, H.265, ProRes, ProRes RAW | 4K: H.264, H.265 |
| USB | USB4/Thunderbolt 3, 2x Ports | USB4/Thunderbolt 3, 2x Ports |
| Transistors | 20 Billion | 16 Billion |
| Mfc. Process | "Second Generation 5nm" (TSMC N5P?) | TSMC N5 |

Starting from the top, in terms of their Arm-architecture CPU cores, the M2 retains Apple’s 4 performance plus 4 efficiency core configuration. Apple is not disclosing what generation CPU cores they’re using here, but based on the performance expectations and timing, there’s every reason to believe that these are the Avalanche and Blizzard cores that were first introduced on the A15.

With regards to performance, Apple is saying that the M2 offers 18% improved multi-threaded CPU performance versus the M1. The company does not offer a breakdown of clockspeeds versus IPC gains, but if our hunch about M2 being Avalanche/Blizzard is correct, then we already have a good idea of what the breakdown is. Relative to the Firestorm core in the A14/M1, Avalanche offers only modest performance gains, as Apple invested most of their improvements into improving overall energy efficiency. As a result, the bulk of the performance gains there come from increased clockspeeds rather than IPC improvements.

The performance CPU cores on M2 also come with a larger pool of L2 cache, which also serves to improve performance. Whereas M1 had 12MB of L2 cache shared among the cores, M2 brings this up to 16MB, a 4MB increase over both the M1 and for that matter the A15.

Based on what we’ve already seen with the A15, the bigger update in this generation is on the efficiency core side of matters. The Blizzard CPU cores are increasingly behaving like not-so-little cores, offering relatively high performance and a much wider backend design than what we see with other Arm efficiency cores. Among other things, Blizzard added a fourth integer ALU, which combined with other changes gave the A15 a significant (28%) performance increase in those cores. Carried over to M2, it’s not unreasonable to expect similar gains, though the wildcard factor will be what clockspeeds Apple dials things to.

This, in turn, is also seemingly why Apple has decided to focus on MT performance for their Apple-to-Apple comparison. With the largest performance gains coming courtesy of the efficiency cores, in performance-bound situations it’s MT workloads which get to tap the E cores alongside the P cores that would see the greatest performance improvements. On the whole, Avalanche/Blizzard made for a modest year on the CPU microarchitecture front, and that looks to be carrying over for the M2 SoC.

Meanwhile on the GPU front, Apple is going bigger. Though reticent as always about the underlying architecture – merely calling this a “next generation” GPU – M2 comes with 10 GPU cores baked in, up from 8 on the M1. Officially, this GPU is rated for 3.6 TFLOPS, which is 1 TFLOPS more than the 8-core M1. As well, the new GPU comes with a larger shared L2 cache, though Apple isn’t disclosing the cache size there.

With a combination of a larger core count and what would seem to be a 10% or so increase in GPU clockspeeds (based on TFLOPS), Apple is touting two performance figures for the M2’s GPU. At iso-power (~12W), the M2 should deliver 25% faster GPU performance than the M1. However the M2’s GPU can, for better or worse, also draw more power than the M1’s GPU. At its full power state of 15 Watts, Apple says it can deliver 35% more performance.
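For those curious, the implied GPU clocks can be backed out of Apple’s official TFLOPS ratings. The sketch below assumes 128 FP32 ALUs per GPU core and 2 FLOPs per ALU per clock (one fused multiply-add) – figures consistent with Apple’s previous GPU designs, but unconfirmed for the “next generation” architecture:

```python
# Back out implied GPU clocks from rated TFLOPS.
# Assumption: 128 FP32 ALUs per core and 2 FLOPs/ALU/clock (FMA),
# consistent with prior Apple GPUs but not confirmed for M2.
ALUS_PER_CORE = 128
FLOPS_PER_CLOCK = 2  # one fused multiply-add per clock

def implied_clock_mhz(tflops: float, cores: int) -> float:
    """Implied GPU clock in MHz for a given TFLOPS rating."""
    return tflops * 1e12 / (cores * ALUS_PER_CORE * FLOPS_PER_CLOCK) / 1e6

m1_mhz = implied_clock_mhz(2.6, 8)    # ~1270 MHz
m2_mhz = implied_clock_mhz(3.6, 10)   # ~1406 MHz
print(f"Implied clock uplift: {m2_mhz / m1_mhz - 1:.1%}")  # prints "Implied clock uplift: 10.8%"
```

Under those assumptions, the clockspeed uplift works out to just under 11%, which lines up with our 10%-or-so estimate.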

Overall this indicates that while Apple has been able to improve their energy efficiency – GPUs love running wide and slow – Apple’s peak GPU power consumption is going up. This should have minimal impact on light workloads, but it will be interesting to see what it means for relatively heavy and constant workloads, especially on the fanless MacBook Air. Meanwhile the GPU’s display controller remains seemingly unchanged, topping out at 6K for external monitors.

Tangential to the GPU updates, M2 also comes with an updated video encode/decode block, which at first glance looks a lot like a pared-down version of the block used on the M1 Pro/Max. Those SoCs added support for Apple’s ProRes and ProRes RAW codecs, and that support has now filtered back down into the base M2 SoC. As well, Apple is now officially supporting 8K video decode on the M2, whereas the M1, though never having an official resolution designation, was essentially a 4K part.

Finally, on the processing side of matters, the M2 is inheriting the A15’s updated neural engine. According to Apple, this is still a 16-core design, and it happens to have the same 15.8 trillion operations per second (TOPS) rating as the A15’s neural engine. Which, despite only being on par with the A15, still makes it 40% faster than the M1’s neural engine, which topped out at 11 TOPS.

Altogether, Apple is projecting a great deal of confidence in the performance of their second-generation Apple Silicon chip, and even more so its competitiveness versus Intel. While we’ll have to wait to get our hands on the hardware to confirm its performance, the M1 certainly lived up to claims there. So the expectations for M2 are similarly high.

Memory: LPDDR5-6400, Up To 24GB

While the core logic of Apple’s latest SoC would seem to be largely an enhanced version of the A15, it does have one very notable feature advantage: LPDDR5 support.

Whereas the vanilla M1 (and the A15) only supported LPDDR4X memory, the M2 supports the newer LPDDR5 memory standard. The biggest change there is support for much higher memory clockspeeds; based on Apple’s figures, the M2 is running at 6400Mbps/pin (LPDDR5-6400), which is up significantly from the 4266Mbps/pin (LPDDR4X-4266) memory clockspeeds of the original M1. The net result is that, on the SoC’s 128-bit memory bus, the M2 has 100GB/second of memory bandwidth to play with, a 50% increase over the M1 (~68GB/sec).
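The bandwidth arithmetic is simple enough to check directly; the theoretical peak is actually 102.4GB/sec, which Apple rounds down to 100GB/sec in their marketing materials:

```python
# Peak memory bandwidth = bus width (bits) x data rate (Mbps/pin) / 8,
# expressed in decimal GB/sec.
def peak_bandwidth_gbps(bus_width_bits: int, mbps_per_pin: int) -> float:
    return bus_width_bits * mbps_per_pin * 1e6 / 8 / 1e9

m1_bw = peak_bandwidth_gbps(128, 4266)  # ~68.3 GB/sec (LPDDR4X-4266)
m2_bw = peak_bandwidth_gbps(128, 6400)  # 102.4 GB/sec (LPDDR5-6400)
print(f"Uplift: {m2_bw / m1_bw - 1:.0%}")  # prints "Uplift: 50%"
```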

Apple’s unconventional use of memory technologies remains one of their key advantages versus their competitors in the laptop space, so a significant increase in memory bandwidth helps Apple to keep that position. Improvements in memory bandwidth further improve every aspect of the SoC, and that especially goes for GPU performance, where memory bandwidth is often a bottlenecking factor, making the addition of LPDDR5 a key enabler for the larger, 10-core GPU. Though in this case it’s the M2 playing catch-up in a sense: the M1 Pro/Max/Ultra all shipped with LPDDR5 support first, so the M2 is actually the final M-series chip tier to get the newer memory.

Past that, Apple is once again placing their LPDDR5 memory packages on-package with the processor die itself. So each M2 chip will need to be equipped with memory ahead of time, and device supply is likely to fluctuate a bit based on memory capacity, depending on what the most popular configurations are, especially early on.

M2 devices are available with 8GB, 16GB, or 24GB of memory. Given that Apple is still using just two stacks of memory, it looks like the company is finally taking advantage of LPDDR’s support for non-power-of-two die sizes (e.g. 12Gb dies), which allows them to get 12GB of memory into a single package without any further shenanigans. And assuming Apple replicates this down the line for the obligatory Pro/Max/Ultra SoCs, we should see the top memory capacities of all of Apple’s SoCs increase by 50% over the previous generation.
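The capacity math works out as follows; note that the 8-die stack height here is a hypothetical for illustration, as Apple hasn’t disclosed the actual package configuration:

```python
# How non-power-of-two LPDDR5 dies reach 24GB across two packages.
# The 8-die stack height is a hypothetical configuration for illustration.
def package_capacity_gb(die_gbit: int, dies_per_package: int) -> float:
    """Capacity of one memory package in GB (8 Gbit = 1 GB)."""
    return die_gbit * dies_per_package / 8

per_package = package_capacity_gb(12, 8)  # 12 GB per package from 12Gbit dies
total = per_package * 2                   # 24 GB with two packages
print(per_package, total)  # prints "12.0 24.0"
```

With conventional 16Gbit (power-of-two) dies, the same two-package arrangement would land on 16GB or 32GB, so the 12Gbit option is what makes the 24GB tier possible.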

And the Rest: Updated ISP, Same USB

Rounding out today’s M2 announcement, there are a couple more items that warrant a quick call-out.

First, the M2 is getting an updated ISP as well as an updated Secure Enclave. Like other aspects of M2, these are likely inherited from the A15, which received similar updates as well.

Meanwhile, a look at the specs of the new MBA and MBP indicates that there haven’t been any notable changes in USB or other I/O support for the new SoC. M1 was already at the top of the curve in 2020 when it launched with USB4 support, so nothing has changed here. This does mean, however, that the SoC is seemingly still limited to Thunderbolt 3 support, despite the fact that Thunderbolt 4 has now been out for well over a year. Both the MBA and MBP are also shipping with two USB ports, so it would seem that’s still the native limit of the SoC.

Apple also hasn’t talked at all about PCIe capabilities. We’ll know more once we have the hardware in-hand, but at least for now there’s no reason to believe that Apple has added PCIe 5 support or changed the number of lanes available. I/O has remained something of a constraining factor for the entire Apple Silicon family, so it does make me wonder about what this means for the eventual Apple Silicon Mac Pro.

Available in July

Closing out today’s announcement, the M2 will be shipping in the new 2022 MacBook Air, as well as the refreshed 2022 13-inch MacBook Pro. According to Apple, those devices will be available in July, with pre-orders open today.

In the meantime, the M1 isn’t going anywhere. Besides being at the heart of the Mac Mini – which didn’t receive an update today – Apple is keeping the 2020 M1-based MacBook Air around. So both versions of the entry-level M-series SoC will be sticking around for some time to come.

171 Comments

  • Calin - Wednesday, June 8, 2022 - link

    The RTX 3050 is a 75W card (IIRC), and the 3090 is a 250W (I think) card.
    Meanwhile, the M2 uses up to 15W for graphics.
    The power discrepancy is too great here, no chance the M2 will beat a card running at five times the power budget.
    Assuming the M2 Ultra Pro Max triples the power budget for the integrated GPU, it still won't reach parity with the 3050.
  • meacupla - Wednesday, June 8, 2022 - link

    No, the comparison was between the M1 Ultra and RTX 3050.
    The M1 Ultra consumes 215W at full load, whereas the RTX 3050 only does around 75W at full load.
    Guess which one is faster by about 35%? It's the RTX 3050.
    So not only is the RTX 3050 more power efficient than the M1 Ultra, it does it on a worse node, and remains faster.

    Yet, somehow, I am expected to believe an M2 Ultra is going to match a RTX 3090, when the base model M2 only has a 35% improvement in graphics over the M1.
  • Ashan360 - Friday, June 10, 2022 - link

    Replace “Graphics” with “most current gaming and 3D modeling” and I’ll agree with you. In other software optimized GPU bound tasks, the M1 series chips are class competitive or leading when evaluating based on power efficiency, which is super important for laptops. Software optimization is just a matter of time now that there is such a quickly growing market of Apple Silicon Macs owned by affluent users to sell software for.
  • Willx1 - Friday, June 10, 2022 - link

    I thought the Max kept up with a 3060, how does the Ultra only compete with a 3050? I’m assuming you’re talking about gaming only?
  • Silver5urfer - Monday, June 6, 2022 - link

    Initially I was thinking Apple would go for N3, then it didn't make sense how fast Apple can switch given the % revenue cut of Macs for the entire Apple stream. On top of the increasing costs. So maybe N3 will debut for M3.

    So a modest gain, I wonder how the cost scaling is, it's annoying to keep track of M1 release vs M2 release prices. I expect a bump by at-least $250-300.. Anyways Apple fans can buy this probably, but looking at this cadence of refresh. It's really stupid. I hate how the trend became nowadays, started with Smartphones about yearly refreshes. With the BGA machines it's even worse, you cannot even replace anything let alone upgrade. On top Apple's HW is super anti-DIY it's locked down to brim. Esp that Secure Enclave which is basically Intel ME / AMD PSP type backdoor system on the silicon.

    Their graphs are super idiotic lol just remove that garbage curve and draw a bar chart you get the performance. Also their GPU is going to be decimated soon by upcoming Nvidia and AMD's new designs on TSMC 5N.

    The CPU of M1 already got beaten by existing laptops when Andrei tested, esp it was only a meager % better over Zen 2+ Ryzen 4000 ultra low power BGA designs which were super inferior over even Comet Lake BGA parts and pathetic GPUs (1650 mobile LOL). IPC already was beaten by Alder Lake forget about MT since it cannot even beat many BGA processors.

    M1 Ultra was faster in GPU but it was nothing in front of already existing designs and Apple got slapped by even Verge on RTX3090 class claim and beaten by an RTX 2080Ti, which is 4 year old product and costs less than their highly expensive ($4000) transistor bloated design.

    I wonder if they are going to target RTX4090Ti / RX7950XT MCM this time with M2 Ultra ;)
  • name99 - Monday, June 6, 2022 - link

    You’re assuming there will be an M2 line structured the same way as the M1 line.
    I don’t believe so. Notice that no M2 mini was announced…

    My guess is the M line will be (approximately) the battery powered line, like old Intel U-class. There will be a new line announced soon’ish after A16/iPhone14 that will span the equivalent of H-class to Xeon, not attempting the ultra-low energies of the M line, with consequent higher performance.
    If I’m right, don’t expect an M2 Pro, Max; the new line, call it Q1, will start at the Pro level, presumably in an 8+2 config (probably there’ll be a 6+2 yield-sparing version) and go up from there, with the Max being an ultra config (two chips) and the Ultra being a four chip (32+8) config.

    Well, we’ll see! 3.5 months till September event, then maybe two months after that…
  • fazalmajid - Monday, June 6, 2022 - link

    The M1 Ultra was actually part of a three-chip set developed for their flailing AR glasses project until Jony Ive decided to gimp the project (like he did the butterfly keyboard) by nixing the auxiliary processing unit. Source: The Information.
  • mode_13h - Tuesday, June 7, 2022 - link

    The M1 Ultra in AR Glasses? You mean in a hip-mounted rendering pack, a la Magic Leap? Because even at the lowest power threshold, it doesn't seem like you could put it in anything head-mounted that Apple would actually sell.

    Another reason I'm skeptical the Ultra was really designed for AR is that you really don't need so many general-purpose cores for it. The computer vision portions of the pipeline are more suitable to run on DSPs or GPUs, while the rendering is obviously GPU-heavy. In MS' Hololens 2, did they even do any compute outside of the Snapdragon?
  • name99 - Tuesday, June 7, 2022 - link

    Not hip mounted.
    What's suggested in the patents is a base-station in the same room as the glasses.

    I think it's foolish to make predictions about how much compute the glasses will or will not need for various tasks, especially if Apple will be, for example, making aggressive use of new technology like Neural Radiance Fields as part of their rendering.

    Your argument is essentially "A fitbit can use some dinky ARM chip, therefore an Apple watch can also use same dinky ARM chip"; that only works if you assume that Apple wants to do the exact same things as fitbit, nothing more...
  • mode_13h - Tuesday, June 7, 2022 - link

    > What's suggested in the patents is a base-station in the same room as the glasses.

    Are you sure they were talking about AR, then? Because AR is really about being fully untethered. Making an AR system with a basestation would be like an iPhone that had only wifi connectivity. You can do it, but it doesn't make a ton of sense - especially when the rest of the world is making proper cell phones.

    > I think it's foolish to make predictions about how much compute the glasses will
    > or will not need for various tasks

    Set aside your mild disdain, for a moment, and let's look at what I actually said:

    "you really don't need so many general-purpose cores for it."

    Nothing you said contradicts that. I simply pointed out that general-purpose CPU cores aren't the best tool for the heavy computational tasks involved in AR. That (and power/heat) are the reasons it sounds strange to use something like the M1 Ultra, for it.

    > for example, making aggressive use of new technology like Neural Radiance Fields

    Sure, and you need neural engines for that - not general-purpose CPU cores. So, that tells us that the Ultra has too many CPU cores to have been designed *exclusively* for AR/VR. That's not saying AR/VR wasn't _on_the_list_ of things it targeted, but @fazalmajid suggested it was developed *specifically* for the "AR glasses project".

    > Your argument is essentially "A fitbit can use some dinky ARM chip,

    Not exactly. There's a baseline set of tasks needed to perform AR, and Hololens is an existence proof of what hardware specs you need for that. Sure, you could do a better job with even more compute, but at least we know an Ultra-class SoC isn't absolutely required for it. And when we're talking about a little battery-sucking heater you wear on your head, the baseline specs are very important to keep in mind. That said, Hololens 2 is a great improvement over the original, but still could use more resolution + FoV, so the benefit of more GPU horsepower is obvious.
