Samsung Announces New 9810 SoC: DynamiQ & 3rd Gen CPU
by Brett Howse & Andrei Frumusanu on January 3, 2018 10:30 PM EST
Posted in: Exynos 9810
After teasing the Exynos 9810 in CES-related press material back in early November, as well as an early announcement about the new modem capabilities last summer, we now finally see the official announcement of the new SoC ahead of CES next week.
The new Exynos 9810, much like the Snapdragon 845 announced a few weeks ago in December, is a major upgrade on the CPU side of things as it migrates to a DynamiQ cluster configuration. Here we find Samsung's third-generation custom core, the Exynos M3. We don't know much about the micro-architectural changes of the new core; however, Samsung has stated that the new CPU has a wider pipeline and improved cache memory. What we expect is a large overhaul of the memory subsystem in the private L2 caches, as well as a larger L3, which should bring major performance uplifts in memory access during heavy workloads. Coupled with the M3 cores, we see ARM's new A55 little cores used as the efficiency cluster.
**Samsung Exynos SoCs Specifications**

| SoC | Exynos 9810 | Exynos 8895 |
|-----|-------------|-------------|
| CPU | 4x Exynos M3 @ 2.9 GHz (4x 512KB L2 ??), 4x Cortex A55 @ 1.9 GHz (4x 128KB L2), 4096KB L3 DSU ?? | 4x Exynos M2 @ 2.314 GHz, 4x Cortex A53 @ 1.690 GHz |
| GPU | Mali G72MP18 | Mali G71MP20 |
| Memory | 4x 16-bit CH LPDDR4x @ 1794 MHz | 4x 16-bit CH LPDDR4x @ 1794 MHz |
| Media | 10-bit 4K120 encode & decode: H.265/HEVC, H.264, VP9 | 4K120 encode & decode: H.265/HEVC, H.264, VP9 |
| Modem | Shannon Integrated LTE; DL = 1200 Mbps (6x20MHz CA, 256-QAM); UL = 200 Mbps (2x20MHz CA, 256-QAM) | Shannon 355 Integrated LTE; DL = 1050 Mbps (5x20MHz CA, 256-QAM); UL = 150 Mbps (2x20MHz CA, 64-QAM) |
Samsung hasn't announced all of the new CPU parameters yet, but they have announced a 2.9 GHz maximum frequency for the M3 cluster, which is a large step up over the 2.3 GHz of the outgoing model. This is thanks to the Exynos 9810 being produced on the second-generation 10nm manufacturing node, 10LPP, which promises up to a 10% performance increase at iso-power or a 15% decrease in power at iso-performance.
With the increased IPC expected from the new cores, and the faster frequency, this should be a significant increase in performance over the outgoing model, which our comparison test showed to be fairly evenly matched with the Qualcomm Snapdragon 835. Samsung claims up to double the single-thread performance and a 40% uplift in multi-thread performance. The single-thread claim would be the single biggest performance jump in the industry, and if we're even just talking simple GeekBench scores, that would put the Exynos 9810 at the performance levels of Apple's A10 and A11. Of course, having this on a quad-core CPU raises the question of how it's achieved: is the 2.9 GHz clock available on all cores, or is it just a single-core boost clock? And at what kind of TDP does it achieve this massive performance boost?
The GPU follows the lead of the Kirin 970 in adopting the new Mali G72 Heimdall GPU IP from ARM. What stands out here is that Samsung has actually decreased the GPU core count from 20 to 18 while still managing to increase performance by raising the clock frequency from 546 MHz to a higher, undisclosed frequency, likely in the mid-700 MHz range. The claimed performance increase is conservative at only 20%, but more importantly, efficiency should be up thanks to the new GPU IP and process.
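The "mid 700MHz range" estimate can be sanity-checked with simple arithmetic, assuming GPU performance scales roughly linearly with core count times clock (a simplification that ignores memory bandwidth and other bottlenecks):

```python
# Back-solve the G72 clock needed for Samsung's claimed +20%
# over the G71MP20 @ 546 MHz, assuming perf ~ cores x clock.
g71_cores, g71_mhz = 20, 546
g72_cores = 18
uplift = 1.20  # claimed performance increase

g72_mhz = uplift * g71_cores * g71_mhz / g72_cores
print(f"Required G72 clock: ~{g72_mhz:.0f} MHz")  # ~728 MHz
```

A result of roughly 728 MHz lines up with the mid-700 MHz estimate above.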
The modem, as disclosed earlier in 2017, now adheres to 3GPP Release 13 and implements UE Category 18 downlink capabilities of up to 1200 Mbps through up to 6x carrier aggregation and 256-QAM, slightly exceeding the capabilities of the Snapdragon 845 and Kirin 970 on paper, though this is something to be tested in practice. The Exynos 9810's modem is also the first to employ 256-QAM in the uplink, achieving up to 200 Mbps as UE Category 18 in the uplink as well.
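As a rough illustration of where the 1200 Mbps figure comes from, here is a back-of-envelope LTE peak-rate calculation. The PHY parameters (100 resource blocks per 20 MHz carrier, 12 subcarriers per block, 14 OFDM symbols per 1 ms subframe) are standard LTE figures; the 2x2 MIMO assumption and the resulting raw rate are our own sketch, not Samsung's disclosed math, and real Cat 18 rates land lower after control and coding overhead:

```python
# Raw (pre-overhead) LTE downlink rate for one 20 MHz carrier,
# assuming 256-QAM and 2x2 MIMO. Standard LTE PHY parameters.
rbs_per_carrier = 100      # resource blocks in a 20 MHz carrier
subcarriers_per_rb = 12
symbols_per_ms = 14        # OFDM symbols per 1 ms subframe (normal CP)
bits_per_symbol = 8        # 256-QAM
mimo_layers = 2

per_carrier_mbps = (rbs_per_carrier * subcarriers_per_rb * symbols_per_ms
                    * bits_per_symbol * mimo_layers) / 1000  # kbit/ms -> Mbps
print(per_carrier_mbps)      # 268.8 Mbps raw per carrier
print(6 * per_carrier_mbps)  # ~1613 Mbps raw across 6 carriers
```

After control channels, reference signals, and coding overhead eat their share, 6 aggregated carriers shake out to the ~1200 Mbps cap that Category 18 specifies.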
As is usual with new flagship Exynos announcements, the SoC is likely already in mass production and waiting to be used in the new Galaxy S9 series, which is expected in the MWC timeframe at the end of February.
Source: Samsung Newsroom
BurntMyBacon - Thursday, January 4, 2018 - link
@spunjji
YES!!! This is a transient current draw issue. There is more than one way to fix this problem. A few obvious(?) solutions are:
1) Software safeguards try to prevent actions that cause larger transients (What Apple selected)
2) Using a SoC or other hardware with less current draw or transient variability
3) Use a larger battery (oversized batteries relative to the task have less voltage drop on transient events)
4) Use a more advanced power regulation circuit (This is not entirely unlike how more advanced VRM circuitry on motherboards will have less voltage droop due to transients)
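The transient mechanism behind all four options can be sketched with the simplest battery model: terminal voltage under load is the open-circuit voltage minus the current times the cell's internal resistance, and a cell's internal resistance rises as it ages. All numbers below are illustrative assumptions, not measured iPhone figures:

```python
# Minimal brownout sketch: V_terminal = V_oc - I * R_int.
# Illustrative values only; the cutoff, currents, and resistances
# are assumptions for demonstration, not real device figures.
def terminal_voltage(v_oc, r_int_ohms, current_a):
    return v_oc - current_a * r_int_ohms

V_CUTOFF = 3.0   # assumed SoC brownout threshold (volts)
v_oc = 3.6       # open-circuit voltage of a partly drained cell

fresh_r, aged_r = 0.15, 0.45   # internal resistance grows with age
burst_a = 2.5                  # current spike during a CPU burst

print(terminal_voltage(v_oc, fresh_r, burst_a))  # 3.225 V -> survives
print(terminal_voltage(v_oc, aged_r, burst_a))   # 2.475 V -> brownout
# Apple's software fix amounts to capping the burst current:
print(terminal_voltage(v_oc, aged_r, 1.0))       # 3.15 V -> stable, slower
```

Solutions 2-4 in the list above attack the same equation from the other sides: lower transient current, lower effective resistance (bigger battery in parallel), or regulation that rides through the droop.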
Each of these solutions involves trade-offs in cost, performance, size, and weight. The software solution could have been chosen for several reasons, including but not limited to:
1) The problem may not have been obvious until after the phones were already shipping, ruling out hardware solutions for existing models. This scenario is weak, but not impossible, as Apple released the software solution prior to major issues if I recall correctly.
2) Hardware solutions involved trade-offs that Apple was unwilling to make. (Lower performance, larger device, higher manufacturing cost, etc.)
3) The decision makers decided that a software solution was cheaper and wouldn't be noticeable within the average usage cycle of their target audience. (U.S. carriers have a two year "free" upgrade cycle). Motivating people to upgrade from older phones would be a windfall in this scenario.
4) (Conspiracy theory to which I don't subscribe) There is no major battery issue, or the issue was purposely built into the device. The software "solution" is the latest in a long line of attempts by Apple to force their customers to upgrade.
I won't go into why the conspiracy theory is unlikely other than to say that there are simpler explanations that seem more likely.
Spunjji - Thursday, January 4, 2018 - link
"Technically very sensible"
Not even slightly, it's a kludge. I know of a few Android devices in my time that have had similar premature-shutdown issues after reaching 2-3 years of age. Meanwhile nearly every iPhone 6 / 6S owner I know has had this problem starting after iOS 10 was released and between 12-18 months of age. Speaking less anecdotally, I used to work in an Apple store and the number of people coming in with battery issues was tremendous.
Apple made out that it was a problem with a batch of batteries, which the very testing we performed in-store suggested was utterly untrue. They have been thoroughly opaque all the way through (as pretty much every manufacturer is when they make a cock-up of this magnitude, to be fair).
What seems to be unique to Apple is the degree to which people are prepared to run defence for them on a problem that is entirely of their own making and does not affect any other manufacturer's devices to the same extent.
FullmetalTitan - Thursday, January 4, 2018 - link
I'm more inclined to believe it is a systematic design issue given how tight the performance specs are for Apple devices. The acceptable tolerance window for an A8/9/10 chip was significantly narrower than the same generation Exynos or Snapdragon part, and yet the Apple parts are overwhelmingly more likely to need to be throttled in this way to maintain basic functionality.
Zingam - Saturday, January 6, 2018 - link
But does it melt down and vomit Spectres?
shabby - Wednesday, January 3, 2018 - link
When are we going to go past 29 GB/s of memory bandwidth? The SD810 had 25 GB/s and the 820/835 had 29 GB/s, what's the holdup?
webdoctors - Wednesday, January 3, 2018 - link
SoCs have gone past that, but not in phones.
Maybe there's no performance demand in phones to drive more BW ?
It does indicate perf will be limited for any high-BW applications like graphics on mobile for the foreseeable future. Will need to wait for Apple to bring high quality apps to mobile.
MrSpadge - Thursday, January 4, 2018 - link
Bandwidth costs power, so it's not wise to use more than you really need. The recent ARM designs focused on extracting better real-world performance from the same maximum bandwidth, which I'd say is better than simply adding more hardware.
ZeDestructor - Thursday, January 4, 2018 - link
Also pin-count and memory controller die area. If you want to keep the PoP setup for space constraints, you become unable to grow the pin count. Result: we're probably stuck with a 64-bit bus on phones until HBM/HMC and interposers/EMIB get cheap enough.
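For reference, the ~29 GB/s figure discussed in this thread falls straight out of the 64-bit bus and the clock in the spec table above; a quick sketch, assuming the usual two transfers per clock for DDR memory:

```python
# Memory bandwidth math for the Exynos config in the spec table:
# 4 x 16-bit channels = 64-bit bus, LPDDR4X @ 1794 MHz, DDR.
bus_bits = 4 * 16
clock_hz = 1794e6
transfers_per_clock = 2   # double data rate

gb_per_s = (bus_bits / 8) * clock_hz * transfers_per_clock / 1e9
print(f"{gb_per_s:.1f} GB/s")  # 28.7 GB/s
```

Pushing past that number without widening the bus means a faster clock, which is exactly the pin-count/power trade-off described above.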
jjj - Wednesday, January 3, 2018 - link
Any chance you guys are testing the fixes for Spectre and Meltdown on ARM? And if you do test, can you please also look at power and not just perf?
Ryan Smith - Thursday, January 4, 2018 - link
https://www.anandtech.com/show/12214/understanding...