Hands-On & More With Huawei's Mate 10 and Mate 10 Pro: Kirin 970 Meets Artificial Intelligence
by Ian Cutress on October 16, 2017 9:00 AM EST
This morning Huawei is taking the wraps off of its latest generation flagship smartphones, the Mate 10 series. Powered by subsidiary HiSilicon’s Kirin 970 SoC, the new phones are a mix of something old and something new for the company: a design that is not simply a carbon copy of the earlier Mate phones, yet is still very much a traditional smartphone, paired with cutting-edge silicon. It’s an interesting balancing act, and one that, if consumers agree, will further bolster Huawei’s success in the international smartphone market while pushing a nascent technology to the forefront of the mobile industry.
That technology is, of course, artificial intelligence, which has become the buzzword for the latter half of this decade in the world of technology. Long a lofty goal of computer science – if not perhaps its holy grail – recent advancements in the field have opened the door to new methods and new applications. And while this era of neural networking-driven AI is not by any means producing devices that actually think like a human, even this weak form of AI is, in the right use cases, far more capable than anything that has come before it.
Of course, neural networking hardware is only as useful as the applications that run on it, and in these still-early days of the field, the industry as a whole is trying to figure out what those applications should be. A self-driving car or a smart NPC in a video game are obvious fits, but what such hardware buys a smartphone is less obvious at first. Huawei announced that its new Kirin 970 chipset had dedicated silicon for running artificial intelligence networks, and the Mate 10 series is going to be the first set of devices running this chip. Today, the company announced the smartphones and unveiled the features.
The Mate 10, Mate 10 Pro, and Mate 10 Porsche Design
The devices themselves are part of Huawei’s yearly cadence with the Mate series. Every year at around this time we see a new smartphone SoC and the first two devices that power it: the Mate and the Mate Pro. Both the hardware and the design are meant to be iterative – Huawei’s HiSilicon division takes the ‘best’ IP available from ARM to develop the processor, and the design team takes cues from the industry as to what will be the next statement in aesthetics.
One of the big trends for 2017 (and moving into 2018) is full-screen display technology. In previous years, manufacturers have often quoted ‘screen-to-body’ ratios to show how much of the face of the device is taken up by screen, but it is this year that the industry has really started to push the boundaries. Arguably devices such as Xiaomi’s MI MIX range were instrumental in pushing this, and the upside is more screen for everyone, or the same sized screen in smaller devices. Huawei is pushing the screen with what it markets as the ‘FullView Display’.
The Mate 10 comes with a 5.9-inch FullView display, using a glass front for the 2560x1440 LCD panel, coming in at 499 pixels per inch. Huawei is quoting panels capable of a 1500:1 contrast ratio, while the color space is listed at a less-than-useful metric of 96% NTSC.
The Mate 10 Pro (and Porsche Design) is slightly bigger with its 6.0-inch display, although this time it is an OLED panel at 2160x1080 resolution. This is a lower resolution and pixel density (402 ppi) than the regular Mate 10, but the panel is rated at 112% NTSC and a 7000:1 contrast ratio. The lower resolution and use of OLED may also help battery life, and overall the unit is lighter than the Mate 10.
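The quoted pixel densities follow directly from the resolutions and diagonals; as a quick sanity check (a minimal sketch, with the small deviation from Huawei's 499 ppi figure presumably down to rounding in the quoted 5.9-inch diagonal):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch from panel resolution and diagonal size."""
    diagonal_px = math.hypot(width_px, height_px)  # pixels along the diagonal
    return diagonal_px / diagonal_in

print(round(ppi(2560, 1440, 5.9)))  # Mate 10: ~498 (Huawei quotes 499)
print(round(ppi(2160, 1080, 6.0)))  # Mate 10 Pro: ~402
```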
Neither device goes to the extreme of having the display completely cover the front, as doing so requires relocating internals such as the front camera (moved to the bottom bezel on the MI MIX, and into the notch on the iPhone X) and rethinking how to implement fingerprint technology. One of the biggest design deviations for this generation of Mate devices is that the regular Mate 10 now has its fingerprint sensor on the front of the phone, rather than the rear. In my eyes this is a pretty big jump, given that the Mate S, Mate 8, and Mate 9 regular editions all had fingerprint sensors on the rear. The Mate 10 Pro, by contrast, does keep the sensor on the rear.
This pre-production unit hasn't updated the logo
There is no difference in SoC between the devices, with each getting the full-fat Kirin 970. This means four ARM Cortex-A73 cores at 2.36 GHz and four ARM Cortex-A53 cores at 1.84 GHz. These are paired with Mali-G72 MP12 graphics (at an unstated frequency), the i7 sensor processor, and Huawei’s new Neural Processing Unit, or NPU (more on this later). All of the units use Huawei’s latest Category 18 integrated LTE modem, capable of 1.2 Gbps downloads using 4x4 MIMO on 3-carrier aggregation with 256-QAM. Each device supports dual-SIM LTE concurrently (along with dual-SIM VoLTE), although this limits downloads to Category 16. Uploads are at Category 13.
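As a back-of-the-envelope sketch of where a 1.2 Gbps class figure comes from (idealized assumptions only; real per-category limits come from the 3GPP transport block size tables, and operator carrier combinations vary): three aggregated 20 MHz carriers, each with 100 resource blocks of 12 subcarriers and 14 OFDM symbols per millisecond, at 8 bits per symbol for 256-QAM and 4 spatial layers, gives a raw physical-layer rate well above 1.2 Gbps, which coding and control overhead then brings back down:

```python
# Back-of-the-envelope LTE peak rate (idealized figures, not 3GPP TBS math)
carriers = 3           # 3x carrier aggregation
prb_per_carrier = 100  # resource blocks in a 20 MHz LTE carrier
subcarriers = 12       # subcarriers per resource block
symbols_per_ms = 14    # OFDM symbols per 1 ms subframe (normal cyclic prefix)
bits_per_symbol = 8    # 256-QAM
layers = 4             # 4x4 MIMO spatial layers

# bits per millisecond -> megabits per second
raw_mbps = (carriers * prb_per_carrier * subcarriers * symbols_per_ms
            * bits_per_symbol * layers) / 1000
print(raw_mbps)  # 1612.8 Mbps raw; ~25% coding/control overhead lands near 1.2 Gbps
```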
Only one memory and storage option is available for the Mate 10, with Huawei settling on 4GB of LPDDR4X DRAM and 64GB of NAND, with microSD card support further augmenting that, though it takes up one of the SIM slots. The specifications list microSD support as topping out at 256GB; we will ask Huawei about the new 400GB microSD cards.
The Mate 10 Pro will be available in 4GB/64GB and 6GB/128GB versions, although the latter will be dependent on region – we are told around 20 countries are on the initial list. The Mate 10 Porsche Design model will be only available in a 6GB/256GB configuration, similar to last year.
All the devices come with the typical dual-band 802.11ac Wi-Fi support, extending to BT 4.2, and will include NFC. All three devices use USB Type-C, but only the base model keeps a headphone jack. Despite the Mate 10 Pro/PD being physically bigger than the standard Mate 10, all three devices use a 4000 mAh battery, which is TUV-certified for Huawei's SuperCharge fast charging. That is fairly large for a modern flagship, which is perhaps a benefit of only a few smartphone companies now competing on the ‘under 7mm’ thickness metric: the Mate 10 and Mate 10 Pro come in at 8.2mm and 7.9mm respectively.
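The two battery figures Huawei quotes are consistent with each other: energy in watt-hours is capacity in amp-hours multiplied by the cell's nominal voltage, so the numbers imply a 3.82 V nominal cell (a quick cross-check on the spec sheet, not a figure Huawei states directly):

```python
capacity_mah = 4000  # quoted battery capacity
energy_wh = 15.28    # quoted battery energy

# Wh = Ah * V, so nominal voltage falls out of the two quoted figures
nominal_v = energy_wh / (capacity_mah / 1000)
print(round(nominal_v, 2))  # 3.82 V nominal cell voltage
```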
The cameras on all the devices are identical as well, with Huawei further leveraging its Leica brand cooperation. The front camera is an 8MP f/2.0 unit, while the rear camera does something a little different this time around. The dual camera arrangement is vertical, as on the Mate 9, but without the extra protective shroud around the lenses. The sensors are the same 12MP RGB and 20MP monochrome pairing found on last year’s flagships, although this time both sit behind f/1.6 Leica SUMMILUX-H lenses and support AI-powered bokeh. This allows for ‘2x hybrid zoom’ (which, as we established last year, is more of a crop than a zoom), and the phones also have 4-way focus (PDAF, CAF, laser, depth) and a dual-LED flash.
Huawei will launch these devices on Android 8.0, using its custom implementation called EMUI. The last generation was EMUI 5, and this generation will be called EMUI 8. The reason for the jump is two-fold: the number 8 is considered highly auspicious in Chinese culture, and it also addresses questions as to why the EMUI numbering was ‘behind’ the Android version. Huawei intends to keep EMUI’s version number paired with the Android version for the foreseeable future.
**Huawei Mate 10 Series**

| | Mate 10 | Mate 10 Pro | Mate 10 Porsche Design |
|---|---|---|---|
| SoC | HiSilicon Kirin 970: 4x Cortex-A73 @ 2.36 GHz, 4x Cortex-A53 @ 1.84 GHz, ARM Mali-G72 MP12 @ ? | same | same |
| Dimensions | 150.5 x 77.8 x 8.2 mm | 154.2 x 74.5 x 7.9 mm | 154.2 x 74.5 x 7.9 mm |
| NAND | 64 GB (UFS 2.1) | 64/128 GB (UFS 2.1) | 256 GB (UFS 2.1) |
| Battery | 4000 mAh (15.28 Wh) | same | same |
| Front Camera | 8MP, f/2.0 | same | same |
| Rear Camera | Color: 12MP, f/1.6; Monochrome: 20MP, f/1.6; PDAF + Laser AF + Contrast AF + Depth; OIS, HDR, dual-tone LED flash | same | same |
| Modem | HiSilicon LTE (integrated); 2G / 3G / 4G LTE; Category 18/16 download, Category 13 upload | same | same |
| SIM Size | 2x NanoSIM (dual standby) | same | same |
| Wireless | 802.11a/b/g/n/ac, BT 4.2 LE, NFC, IrLED, GPS/GLONASS/Galileo/BDS | same | same |
| Connectivity | USB 2.0 Type-C, 3.5mm headset | USB 2.0 Type-C | USB 2.0 Type-C |
| Launch OS | Android 8.0 with EMUI 8.0 | same | same |
| Launch Price | 699 Euro (4/64) | 799 Euro (6/128) | 1349 Euro |
Pricing for the Mate 10 and Mate 10 Pro is likely to mirror the pricing for last year’s flagships. This means around $549-$599 for the regular edition and $649-$699 for the Pro. Add in another $100 for the higher-capacity model, and probably another $250-$400 for the Porsche Design version. (Update: official Euro pricing is in the table.)
name99 - Monday, October 16, 2017
Think of AI as a pattern recognition engine. What does that imply?
Well for one thing, the engine is only going to see patterns in what it is FED! So what is it being fed?
Obvious possibilities are images (and we know how that's working out) and audio (so speech recognition+translation, and again we know how that's working out). A similar obvious possibility could be stylus input and so writing recognition, but no-one seems to care much about that these days.
Now consider the "smart assistant" sort of idea. For that to work, there needs to be a way to stream all the "activities" of the phone, and their associated data, through the NPU in such a way that patterns can be detected. I trust that at least the programmers reading this are starting to see what sort of a challenge that is. What are the relevant data structures to represent this stream of activities? What does it mean to find some pattern/clustering in these activities, and how is that actionable?
Now Apple, for a few years now, has been pushing the idea that every time a program interacts with the user, it wraps up that interaction in a data structure that describes everything that's being done. The initial reason for this was, I think, for the on-phone search engine, but soon the most compelling reason for this (and easiest to understand the idea) was Continuity --- by wrapping up an "activity" in a self-describing data structure, Apple can transmit that data structure from, say, your phone to your mac or your watch, and so continue the activity between devices.
Reason I bring this up is that it obviously provides at least a starting point for Apple to go down this path. But only a starting point. Unlike images, each phone does not generate MILLIONS of these activities, so you have a very limited data set within which to find patterns. Can you get anything useful out of that? Who knows?
Android also has something called Activities, but as far as I can tell they are rather different from the Apple version, and not useful for the sort of issue I described; as far as I know Android has no true equivalent today. Presumably MS will have to define such an equivalent as part of their copy of (a subset of) Continuity that's coming with the Fall Creators Update, and perhaps they have the same sort of AI ambitions that they hope to layer upon it?
Valantar - Tuesday, October 17, 2017
The thing is, the implementation here is a "pattern recognition engine" /without any long-term memory/. In other words: it can't learn/adapt/improve over time. As such, it's as dumb as a bag of rocks. I wholeheartedly agree with not caring whether the phone can recognize cats/faces/landscapes in my photos (which, besides, a regular non-AI algorithm can do too, although probably not as well). How about learning the user's preferences in terms of aesthetics, subject matter, or gallery culling? That would be useful, especially the last one: understanding what the fifteen photos you just took were focusing on, and then picking the best one in terms of focus, sharpness, background separation, colour, composition, and so on. Sure, that's also a task an algorithm could do (and some apps do), but it's sufficiently complex that an AI that learns over time would likely do a far better job. Not to mention that an adaptive AI in that situation could regularly present the user with prompts like "I selected this as the best shot. These were the runners-up. Which do you prefer?", which would give valuable feedback to adjust the process.
serendip - Tuesday, October 17, 2017
I do plenty of manual editing and cataloging of photos shot on the phone, mirroring the processes I use with a DSLR and a laptop. I don't think an AI will know if I want to make a black-and-white series in Snapseed from color photos, it won't know which ones to keep or delete, and it won't know about the folder organization I use.
So what exactly is the AI for?
tuxRoller - Tuesday, October 17, 2017
It COULD improve over time if they push out upgraded NNs. Off-device training with occasional updates to supported devices is going to be the best option for a while.
Krysto - Monday, October 16, 2017
Disappointed they aren't using the AI accelerator for more advanced computational photography, like, say, better identifying a person's face and body and then applying the bokeh around that, or improving dynamic range by knowing exactly which parts to expose more, and so on.
Auto-switching to a "mode" is really something that other phone makers have had for a few years now in their "Auto" mode.
Ian Cutress - Monday, October 16, 2017
This was one of my comments to Huawei. I was told that the auto modes do sub-frame enhancements for clarity, though I'm under the assumption those are the same as previous tools and algorithms and not AI-driven. Part of the issue here is that AI can be a hammer, but there need to be nails.
melgross - Monday, October 16, 2017
With the supposed high performance of this neural chip when compared to Apple’s right now, I’m a bit confused.
Since we don’t know exactly how this works, and we know even less about Apple’s, how can we even begin to compare performance between them?
Hopefully, you will soon have a real review of the 8/8+, as well as the deep dive on the SoC, something which was promised for last year’s model but never materialized.
A comparison between these two new SoCs will be interesting.
name99 - Monday, October 16, 2017
Apple gave an "ops per sec" number, Huawei gave a FLOPS number. One was bigger than the other.
That's all we have.
There are a million issues with this. Are both talking about 32-bit FLOPs? Or 16-bit? Maybe Apple meant 32-bit FLOPs and Huawei 16-bit?
And is FLOP actually a useful metric? Maybe in real situations these devices are really limited by their cache or memory subsystems?
To be fair, no-one is (yet) making a big deal about this, precisely because anyone who knows anything understands just how meaningless both numbers are. It will take a year or three before we have enough experience with what these units do that we CARE about, and so know what to bother benchmarking or how to compare them.
Baidu have, for example, a supposed NPU benchmark suite, but it kinda sucks. All it tests is the speed of some convolutions at a range of different sizes. More problematically, at least as it exists today, it's basically C code. So you can look up the performance number for an iPhone, but it's meaningless because it doesn't even give you the GPU performance, let alone the NPU performance.
We need to learn what sort of low-level performance primitives we care about testing, then we need to write up comparable cross-device code that uses the optimal per-device APIs on each device. This will take time.
melgross - Wednesday, October 18, 2017
This is the problem I’m thinking about. We don’t have enough info to go by.
varase - Tuesday, November 21, 2017
I assumed Apple's numbers were something in the inferences/second ballpark, as this is a neural processor (and MLKit seems to process data produced by standard machine-learning models). We know the Apple neural processor is used by Face ID, first as a gatekeeper (detecting attempts to fake it out), and then to see whether the face in view is the same one stored in the Secure Enclave.
FLOPS seems to imply floating-point operations per second.
Color me confused.