The untold story of Intel's desktop (and notebook) CPU dominance after 2006 has little to do with novel approaches to chip design or the billions spent keeping its army of fabs up to date. While both are critical components of the formula, it's Intel's internal performance modeling team that plays a major role in providing targets for both the architects and fab engineers to hit. After losing face (and sales) to AMD's Athlon 64 in the early 2000s, Intel adopted a "no more surprises" policy: it would never again be caught off guard by a performance upset.

Over the past few years, however, the focus of meaningful performance has shifted: power consumption now matters just as much as absolute performance. Intel has been slowly waking up as it adapts to the new ultra mobile world, and one of the first things to change was the scope and focus of its internal performance modeling. User experience (quantified by mapping high-speed camera frame rate data to user survey results) and power efficiency are now both incorporated into all architecture targets going forward. Building a next-generation CPU core no longer means picking a SPEC CPU performance target and working toward it, but delivering a specific user experience as well.

Intel's role in the industry has started to change. It worked very closely with Acer to bring the W510, W700 and S7 to market. With Haswell, Intel will work even more closely with its partners, going as far as specifying other, non-Intel components on the motherboard in pursuit of maximum battery life. The pieces are beginning to fall into place, and if all goes according to Intel's plan we should start to see the fruits of its labor next year. The goal is to bring Core down to very low power levels and to take Atom even lower. Don't underestimate the significance of Intel's 10W Ivy Bridge announcement. Although desktop and mobile Haswell will appear in mid to late Q2 2013, the exciting ultra mobile parts won't arrive until Q3. In the meantime, 10W Ivy Bridge will be responsible for bringing some more exciting form factors to market. While we're not exactly at Core-in-an-iPad levels of integration, we are getting very close.

To kick off what is bound to be an exciting year, Intel made a couple of stops around the country showing off that even its existing architectures are quite power efficient. Intel carried around a pair of Windows tablets, wired up to measure power consumption at both the device and component level, to demonstrate what many of you will find obvious at this point: that Intel's 32nm Clover Trail is more power efficient than NVIDIA's Tegra 3.

We've demonstrated this in our battery life tests already. Samsung's ATIV Smart PC uses an Atom Z2760 and features a 30Wh battery with an 11.6-inch 1366x768 display. Microsoft's Surface RT uses NVIDIA's Tegra 3 powered by a 31Wh battery with a 10.6-inch, 1366x768 display. In our 2013 wireless web browsing battery life test we showed Samsung with a 17% battery life advantage, despite the 3% smaller battery. Our video playback battery life test showed a smaller advantage of 3%.
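The battery capacity and runtime numbers above can be combined into a rough platform efficiency comparison. A quick sketch of that arithmetic follows; the capacities and the 17% runtime advantage come from our results above, while the derived efficiency figure is just the implied ratio, not a measured value:

```python
# Battery capacities from the two tablets discussed above.
ativ_wh = 30.0      # Samsung ATIV Smart PC (Atom Z2760)
surface_wh = 31.0   # Microsoft Surface RT (Tegra 3)

# The ATIV's battery is ~3% smaller...
capacity_deficit = (surface_wh - ativ_wh) / surface_wh
print(f"ATIV battery is {capacity_deficit:.1%} smaller")

# ...yet it lasted 17% longer in the web browsing test. Average platform
# power is capacity / runtime, so the implied power gap is larger still:
runtime_ratio = 1.17
power_ratio = runtime_ratio * (surface_wh / ativ_wh)
print(f"Implied platform power advantage: {power_ratio - 1:.0%}")
```

In other words, a 17% longer runtime on a roughly 3% smaller battery implies the Clover Trail platform drew on the order of 20% less average power in that test.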

AnandTech Tablet Bench 2013 - Web Browsing Battery Life

For us, the power advantage made a lot of sense. We've already proven that Intel's Atom core is faster than ARM's Cortex A9 (even four of them under Windows RT). Combine that with the fact that NVIDIA's Tegra 3 features four Cortex A9s on TSMC's 40nm G process and you get a recipe for worse battery life, all else being equal.

Intel's method of hammering this point home isn't all that unique in the industry. Rather than measuring power consumption at the application level, Intel chose to do so at the component level. This is commonly done by taking the device apart and either replacing the battery with an external power supply that you can measure, or by measuring current delivered by the battery itself. Clip the voltage input leads coming from the battery to the PCB, toss a resistor inline and measure voltage drop across the resistor to calculate power (good ol' Ohm's law).

Where Intel's power modeling gets a little more aggressive is what happens next. Measuring power at the battery gives you an idea of total platform power consumption including display, SoC, memory, network stack and everything else on the motherboard. This approach is useful for understanding how long a device will last on a single charge, but if you're a component vendor you typically care a little more about the specific power consumption of your competitors' components.

What follows is a good mixture of art and science. Intel's power engineers will take apart a competing device and probe whatever looks to be a power delivery or filtering circuit while running various workloads on the device itself. By correlating the type of workload to spikes in voltage in these circuits, you can figure out what components on a smartphone or tablet motherboard are likely responsible for delivering power to individual blocks of an SoC. Despite the high level of integration in modern mobile SoCs, the major players on the chip (e.g. CPU and GPU) tend to operate on their own independent voltage planes.
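The correlation step above can be illustrated with a trivial sketch: log each candidate rail at idle and under a targeted workload, then flag the rail that responds most. All of the data and rail names below are made up for illustration; the real work involves probing physical circuits, not a dictionary lookup:

```python
# Hypothetical per-rail power readings (watts) at idle vs. under a
# GPU-heavy workload. The rail whose draw jumps the most when the GPU
# is loaded is likely the GPU's supply rail.
idle = {
    "rail_A": [0.10, 0.11, 0.10, 0.12],
    "rail_B": [0.30, 0.29, 0.31, 0.30],
    "rail_C": [0.05, 0.05, 0.06, 0.05],
}
gpu_workload = {
    "rail_A": [0.11, 0.10, 0.12, 0.11],
    "rail_B": [0.31, 0.30, 0.30, 0.32],
    "rail_C": [0.55, 0.60, 0.58, 0.57],   # big jump under GPU load
}

def mean(xs):
    return sum(xs) / len(xs)

deltas = {rail: mean(gpu_workload[rail]) - mean(idle[rail]) for rail in idle}
likely_gpu_rail = max(deltas, key=deltas.get)
print(likely_gpu_rail)  # rail_C
```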

A basic LC filter

What usually happens is you'll find a standard LC filter (inductor + capacitor) supplying power to a block on the SoC. Once the right LC filter has been identified, all you need to do is lift the inductor, insert a very small resistor (2 - 20 mΩ) and measure the voltage drop across the resistor. With voltage and resistance values known, you can determine current and power. Using good external instruments you can plot power over time and now get a good idea of the power consumption of individual IP blocks within an SoC.
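The calculation itself is straightforward Ohm's law. Here's a minimal sketch; the resistor value and voltage readings are illustrative stand-ins, not measurements from either tablet:

```python
# Shunt-resistor power measurement, as described above: measure the
# voltage drop across a known small resistor inserted in the rail,
# derive current, then multiply by the rail voltage.
R_SHUNT = 0.010  # 10 mOhm sense resistor (within the 2-20 mOhm range)

def rail_power(v_drop, v_rail, r_shunt=R_SHUNT):
    """Current through the shunt (I = V / R), times rail voltage."""
    current = v_drop / r_shunt
    return current * v_rail

# e.g. a 5 mV drop across 10 mOhm on a 3.7 V rail:
print(rail_power(0.005, 3.7))  # 0.5 A * 3.7 V = 1.85 W
```

The resistor has to be small precisely so that its own voltage drop doesn't meaningfully disturb the rail it's measuring.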

Basic LC filter modified with an inline resistor

Intel brought along one of its best power engineers, a couple of tablets, and a National Instruments USB-6289 data acquisition box to demonstrate its findings: Microsoft's Surface RT using NVIDIA's Tegra 3, and Acer's W510 using Intel's own Atom Z2760 (Clover Trail). Both were retail samples running the latest software/drivers available as of 12/21/12. The Acer unit in particular featured the latest driver update from Acer (version 1.01, released on 12/18/12), which improves battery life on the tablet. (Remember me pointing out that the W510 seemed to have a problem that caused it to underperform in the battery life department compared to Samsung's ATIV Smart PC? It seems this driver update fixes that problem.)

I personally calibrated both displays to our usual 200 nits setting and ensured the software and configurations were as close to equal as possible. Both tablets were purchased by Intel, but I verified their performance against my own review samples and noticed no meaningful deviation. I've also attached diagrams of where Intel is measuring CPU and GPU power on the two tablets:

Microsoft Surface RT: The yellow block is where Intel measures GPU power, the orange block is where it measures CPU power

Acer's W510: The purple block is a resistor from Intel's reference design used for measuring power at the battery. Yellow and orange are inductors for GPU and CPU power delivery, respectively.

The complete setup is surprisingly mobile, even relying on a notebook to run SignalExpress for recording output from the NI data acquisition box:

Wiring up the tablets is a bit of a mess. Intel wired up far more than just the CPU and GPU; depending on the device and what was easily exposed, you could also get power readings on the memory subsystem and things like NAND.
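Once a rail is wired up, the logged samples reduce to average power and energy with simple post-processing. The sketch below shows the idea; the sample data, shunt value, and rail voltage are all made up, and the real capture was done through NI SignalExpress, not a script like this:

```python
# Post-process a logged rail: convert shunt-voltage samples to power,
# then average and integrate over the capture window.
SAMPLE_RATE_HZ = 1000
R_SHUNT = 0.005   # 5 mOhm, illustrative
V_RAIL = 1.0      # nominal rail voltage, illustrative

v_drops = [0.0005, 0.0012, 0.0011, 0.0004]  # volts across the shunt

powers = [(v / R_SHUNT) * V_RAIL for v in v_drops]  # P = (V/R) * V_rail
avg_power = sum(powers) / len(powers)
energy_j = sum(powers) / SAMPLE_RATE_HZ             # rectangle-rule integral

print(f"average power: {avg_power:.3f} W")
print(f"energy over capture: {energy_j * 1000:.3f} mJ")
```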

Intel only supplied the test setup; for everything you're about to see, I picked and ran whatever I wanted, however I wanted. Comparing Clover Trail to Tegra 3 is nothing new, but the data I gathered is at least interesting to look at. We typically don't get to break out CPU and GPU power consumption in our tests, making this experiment a bit more illuminating.

Keep in mind that we are looking at power delivery on voltage rails that spike with CPU or GPU activity. It's not uncommon to run multiple things off of the same voltage rail. In particular, I'm not super confident in what's going on with Tegra 3's GPU rail although the CPU rails are likely fairly comparable. One last note: unlike under Android, NVIDIA doesn't use its 5th/companion core under Windows RT. Microsoft still doesn't support heterogeneous computing environments, so NVIDIA had to disable its companion core under Windows RT.

Idle Power

  • yyrkoon - Tuesday, December 25, 2012 - link

    First, no one claimed ( at least seriously / sanely) that "ARM" would devour Intel. There *are* far more ARM based processors out there in the world than Intel. That is a simple fact. ARM has also existed for a long time, albeit not quite as long as Intel if I remember right.

    ARM sells far more processors. ARM has processors in just about any processor controlled type of device you can think of (and many you probably have not). However, what is considered "ARM" may be a simple Cortex M0 processor that costs only a few US dollars. Used as a simple keyboard controller, or countless other possible uses. These devices are also RISC based, and are made to do specific-purpose compute tasks while using very little power. Even less while in low power mode (sleep), with the ability to sometimes wake from an interrupt in as little as 1-2 cycles (we're talking microseconds here).

    Lastly, if you wanted to compare processor revenue, you would have to compare profits from ARM and from ARM's partners who sell ARM based processors. You see, this is not an ARM vs Intel thing. This is Intel vs the horde that is ARM. While also keeping in mind that costs can be considerably lower for ARM processors.
  • name99 - Tuesday, December 25, 2012 - link

    Yes and no.
    You are right about Intel's disadvantages. But it's also worth remembering that there are huge numbers of embedded devices out there based on some flavor of PPC (lots of game consoles, lots of network devices, lots of auto entertainment systems) or MIPS. And neither IBM nor MIPS was able to take that embedded advantage into the "branded" CPU space.

    ARM is obviously different. They've managed to make their brand matter, and they are working hard on improvements (whereas both IBM and MIPS seem to be content to sell ever smaller dies of a design from the mid-nineties). But it would be unwise, IMHO, to assume too much advantage in ARM's plentiful very low-end sales.
  • yyrkoon - Tuesday, December 25, 2012 - link

    "ARM" is winning in the context that more android devices have sold in the last few years, than x86 PC's have ever sold. At least according to an article I read a few months back.

    However that was not meant to be my point. My point was that ARM has been around a while, and will continue to be around a lot longer. While their profits are multi-entity, not single.

    Personally, I like the fact that *now* Intel is paying attention. It is good for the industry.
  • stimudent - Tuesday, December 25, 2012 - link

    Also consider Intel's masterful use of kickbacks to manufacturers and suppliers, as well as threats to those who won't conform. That should help them too against ARM.
  • mrdude - Tuesday, December 25, 2012 - link

    and convincing these same OEMs to use Intel is going to be a tougher sell now. Apple and Samsung have no desire nor need for an Atom in a tablet when they've got their own SoCs. And superior SoCs, might I add.

    Mobile applications don't care what ISA they're running on, thus Intel loses the x86 compatibility point here. You're not going to run Photoshop on your smartphone, and while it might work on your tablet, you're going to be pulling your hair out due to its lackluster performance for productivity apps (see Anand's Clover Trail review). If Intel restricts their x86 Atoms to Win8 devices, they're going to have a hard time selling these in any large quantity.

    Then there's the issue of on-die GPU, which for tablets and smartphones is even more important than the CPU performance. That's one area where Intel still lags way behind the others. For mobile devices, gaming apps are the most popular. If Intel has great perf-per-watt and good CPU throughput but still lags woefully behind in GPU performance, the OEMs and consumers won't buy it. Asking a Clover Trail to game on a full HD display or even retina quality isn't going to work out well.

    Price is what matters here above all else. In order for Intel to maintain their fab advantage they also require selling loads of processors at high margins. With huge competitors in the mobile space (Qualcomm just surpassed Intel in market cap) who sell these by the boatloads, Intel's going to have a very tough time of it.

    It is great to see that the x86 power myth is busted, though. That 2-3% of die space dedicated to the x86 decoder doesn't seem to make too much of a difference. Now for the price and GPU portion...
  • Sahrin - Tuesday, December 25, 2012 - link

    Intel doesn't compete with ARM, they compete with Qualcomm, Samsung, nVidia, etc. There is no ISA-level competition; ISA is irrelevant...that's the thesis of the article.
  • Kidster3001 - Friday, January 4, 2013 - link

    The ISA argument is exactly what people said about desktop PC's in the early 80's. "Everything is owned by IBM and Motorola. Intel can't win with this new x86 architecture thing." 10 years later?

    Same thing happened in Servers. "It's all owned by RISC now, x86 will never succeed in servers." 10 years later?

    Then super-computers (HPC) "Intel doesn't stand a chance!" 10 years later?

    Let's wait and see what mobile looks like in 5 or 10 years. History tells us that once Intel decides to seriously play the game, they figure out a way to win.
  • mavere - Monday, December 24, 2012 - link

    I don't expect power efficiency here to compete well with the iPad, but I hope Intel gets there soon, if only to preempt any ideas Cupertino might have about moving the MBA line to ARM.

    Also, maybe Google's new Motorola subsidiary will do something with this. Samsung and Apple have their own chip designs, and MS doesn't really have room in its Surface + Surface Pro dichotomy for a slow(er) x86 part.
  • tipoo - Monday, December 24, 2012 - link

    I'll be curious to see if Intel designs get such a power/performance lead that it gets to a point where Apple would be foolish not to switch over.
  • Kevin G - Tuesday, December 25, 2012 - link

    From a hardware stand point, Intel could get there. The problem with transitioning from ARM to x86 would be one of application compatibility. Apple has to maintain their app ecosystem and a platform change would be very costly. On the other hand, there would be a benefit to Apple solidifying their operating systems on one platform (and possibly merging iOS and OS X themselves).

    The other factor is that Apple is designing not only their own SoC's but also their own CPU cores. That is a major investment and Intel would have to have a seemingly overwhelming product for Apple to write those investments off.
