The Intel 9th Gen Review: Core i9-9900K, Core i7-9700K and Core i5-9600K Tested
by Ian Cutress on October 19, 2018 9:00 AM EST - Posted in
- CPUs
- Intel
- Coffee Lake
- 14++
- Core 9th Gen
- Core-S
- i9-9900K
- i7-9700K
- i5-9600K
Test Bed and Setup
As per our processor testing policy, we take a premium category motherboard suitable for the socket, and equip the system with a suitable amount of memory running at the manufacturer's maximum supported frequency. This is also typically run at JEDEC subtimings where possible.
It is noted that some users are not keen on this policy, stating that sometimes the maximum supported frequency is quite low, that faster memory is available at a similar price, or that JEDEC speeds can be prohibitive for performance. While these comments make sense, ultimately very few users apply memory profiles (XMP or otherwise) as they require interaction with the BIOS, and most users will fall back on JEDEC-supported speeds - this includes home users as well as industry customers who might want to shave a cent or two from the cost or stay within the margins set by the manufacturer. Where possible, we will extend our testing to include faster memory modules, either at the same time as the review or at a later date.
Test Setup

| Platform | CPUs | Motherboard | BIOS | Cooling | Memory |
|----------|------|-------------|------|---------|--------|
| Intel 9th Gen | i9-9900K, i7-9700K, i5-9600K | ASRock Z370 Gaming i7** | P1.70 | TRUE Copper | Crucial Ballistix 4x8GB DDR4-2666 |
| Intel 8th Gen | i7-8086K, i7-8700K, i5-8600K | ASRock Z370 Gaming i7 | P1.70 | TRUE Copper | Crucial Ballistix 4x8GB DDR4-2666 |
| Intel 7th Gen | i7-7700K, i5-7600K | GIGABYTE X170 ECC Extreme | F21e | Silverstone* AR10-115XS | G.Skill RipjawsV 2x16GB DDR4-2400 |
| Intel 6th Gen | i7-6700K, i5-6600K | GIGABYTE X170 ECC Extreme | F21e | Silverstone* AR10-115XS | G.Skill RipjawsV 2x16GB DDR4-2133 |
| Intel HEDT | i9-7900X, i7-7820X, i7-7800X | ASRock X299 OC Formula | P1.40 | TRUE Copper | Crucial Ballistix 4x8GB DDR4-2666 |
| AMD 2000 | R7 2700X, R5 2600X, R5 2500X | ASRock X370 Gaming K4 | P4.80 | Wraith Max* | G.Skill SniperX 2x8GB DDR4-2933 |
| AMD 1000 | R7 1800X | ASRock X370 Gaming K4 | P4.80 | Wraith Max* | G.Skill SniperX 2x8GB DDR4-2666 |
| AMD TR4 | TR 1920X | ASUS ROG X399 Zenith | 0078 | Enermax Liqtech TR4 | G.Skill FlareX 4x8GB DDR4-2666 |

| Component | Hardware |
|-----------|----------|
| GPU | Sapphire RX 460 2GB (CPU tests), MSI GTX 1080 Gaming 8G (gaming tests) |
| PSU | Corsair AX860i, Corsair AX1200i |
| SSD | Crucial MX200 1TB |
| OS | Windows 10 x64 RS3 1709, Spectre and Meltdown patched |

*VRM supplemented with SST-FHP141-VF 173 CFM fans.
**After initial testing with the ASRock Z370 motherboard, we noted it had a voltage issue with the Core 9th Gen processors. As a result, we moved to the MSI MPG Z390 Gaming Edge AC for our power measurements; benchmarking seems unaffected.
We must thank the following companies for kindly providing hardware for our multiple test beds. Some of this hardware is not in this test bed specifically, but is used in other testing.
274 Comments
eastcoast_pete - Sunday, October 21, 2018 - link
Yes; unfortunately, that's a major exception, and annoying to somebody like me who'd otherwise recommend AMD. I really hope that AMD improves its AVX/AVX2 implementation and makes it truly 256 bits wide. If I remember correctly, the lag of Ryzen chips in 256-bit AVX vs. Intel is due to AMD using a 2 x 128-bit implementation (a workaround, really), which is nowhere near as fast as real 256-bit AVX. So, I hope that AMD gives their next Ryzen generation full 256-bit AVX, not the 2 x 128-bit workaround.
mapesdhs - Sunday, October 21, 2018 - link
It's actually worse than that with pro apps. Even if AMD hugely improved their AVX, it wouldn't help as much as it could so long as apps like Premiere remain so poorly coded. AE even has plugins that are still single-threaded from more than a decade ago. There are also several CAD apps that only use a single core. I once sold a 5GHz 2700K system to an engineering company for use with Majix (another largely single-threaded app, though not entirely IIRC); it absolutely blew the socks off their far more expensive Xeon system.
Makes me wonder what they're teaching software engineering students these days; parallel coding and design concepts (hw and sw) were a large part of the comp sci stuff I did 25 years ago. Has it fallen out of favour because there aren't skilled lecturers to teach it? Or do students not like tackling the hard stuff? A bit of both? Some of it was certainly difficult to grasp at first, but even back then there was a lot of emphasis on multi-threaded systems, or systems consisting of multiple separate functional units governed by some kind of management engine (not unlike a modern game, I suppose), with the coding emphasis at the time on derivatives of C++. It's bizarre that after so long, Premiere in particular is still so inefficient, ditto AE. One wonders if companies like Adobe simply rely on hardware trends to provide customers with performance gains instead of improving the code, though this would fly in the face of their claim a couple of years ago that they would spend a whole year focusing on improving performance, since that's what users wanted more than anything else (I remember the survey results being discussed on Creative COW).
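[Ed.: the 2 x 128-bit point raised a couple of comments up can be illustrated with a back-of-the-envelope throughput model. This is a hypothetical sketch, not measured data; the two-port figure and the clean halving are simplifying assumptions, and real cores also differ in clocks, memory behavior, and scheduling.]

```python
# Idealized model: a core with native 256-bit vector units executes one
# micro-op per 256-bit AVX instruction, while a core with 128-bit units
# splits each 256-bit op into two micro-ops, halving peak vector throughput.
def avx256_cycles(n_ops, native_width_bits, vector_ports=2):
    """Cycles to issue n_ops 256-bit operations, ignoring memory,
    dependency chains, and clock-speed differences."""
    uops_per_op = 256 // native_width_bits  # 2 on a 128-bit design, 1 on 256-bit
    return n_ops * uops_per_op / vector_ports

native = avx256_cycles(1_000_000, native_width_bits=256)
split = avx256_cycles(1_000_000, native_width_bits=128)
print(split / native)  # -> 2.0: the 2 x 128-bit design needs twice the cycles
```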
eastcoast_pete - Sunday, October 21, 2018 - link
Fully agree! Part of the problem is that re-coding single-threaded routines that could really benefit from parallel/multi-threaded execution costs the Adobes of this world money, especially if one wants it done right. However, I believe the biggest reason why so many programs, in full or in part, are solidly stuck in the last century is that their customers simply don't know what they are missing out on. Once volume licensees start asking their software suppliers' sales engineers (i.e. sales people) "Yes, nice new interface. But does this version now fully support multithreaded execution, and, if not, why not?", Adobe and others will give this the priority it should have had all along.
repoman27 - Friday, October 19, 2018 - link
USB Type-C ports don't necessarily require a re-timer or re-driver (especially if they're only using Gen 1 5 Gbit/s signaling), but they do require a USB Type-C port controller. The function of that chip is rather different, though. Its job is to use the CC pins to perform device attach/detach detection and plug orientation detection, establish the initial power and data roles, and advertise available USB Type-C current levels. The port controller also generally includes a high-speed mux to steer the SuperSpeed signals to whichever pins are in use depending on the plug orientation. Referring to a USB Type-C port controller as a re-driver is both inaccurate and confusing to readers.
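[Ed.: the CC-pin logic described above can be sketched roughly as follows. This is a simplified illustration of source-side detection only; the voltage window below is illustrative, not the exact vRd thresholds from the Type-C specification, which vary with the advertised current level.]

```python
# Simplified sketch: a sink pulls exactly one CC line toward ground through
# its Rd resistor, so the CC line that reads inside the Rd voltage window
# tells a source-side port controller both "device attached" and which way
# the plug is inserted (which drives the SuperSpeed lane mux).
RD_WINDOW = (0.2, 2.0)  # volts; illustrative detection window, not spec-exact

def classify_cc(cc1_v, cc2_v):
    def in_rd(v):
        return RD_WINDOW[0] <= v <= RD_WINDOW[1]
    if in_rd(cc1_v) and not in_rd(cc2_v):
        return ("attached", "cc1")  # normal orientation
    if in_rd(cc2_v) and not in_rd(cc1_v):
        return ("attached", "cc2")  # flipped plug: mux swaps SuperSpeed lanes
    return ("unattached", None)

print(classify_cc(1.1, 3.3))  # -> ('attached', 'cc1')
```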
willis936 - Friday, October 19, 2018 - link
Holy damn, that's a lot of juice. 220W? That's 60 watts more than a 14-core 3GHz IVB E5. They had better top the charts with that kind of power draw. I have serious reservations about believing two DDR4 memory channels are enough to feed eight 5GHz cores. I would be interested in a study of memory scaling on this chip specifically, since it's the corner case for the question "Is two memory channels enough in 2018?".
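[Ed.: for reference, the raw numbers behind the two-channel concern above, using simple peak-bandwidth arithmetic. The per-core figure is an illustration only; real workloads rarely have all cores streaming at once.]

```python
# Peak theoretical DDR4 bandwidth: megatransfers/s * 8 bytes/transfer * channels.
def ddr4_peak_gb_s(mt_s, channels):
    return mt_s * 1e6 * 8 * channels / 1e9

dual = ddr4_peak_gb_s(2666, channels=2)  # mainstream i9-9900K platform
quad = ddr4_peak_gb_s(2666, channels=4)  # HEDT Skylake-X platform
print(round(dual, 1), round(quad, 1))    # -> 42.7 85.3
# Shared across eight cores, dual-channel leaves roughly 5.3 GB/s per core
# if every core streams simultaneously - the scaling question raised above.
```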
DominionSeraph - Friday, October 19, 2018 - link
This chip would be faster in everything than a 14-core IVB E5, while being over 50% faster in single-threaded tasks. Also, Intel is VERY generous with voltage in turbo. Note the 9700K at stock takes 156W in Blender for a time of 305, but when they dialed it in at 1.025V at 4.6GHz it took 87W for an improved time of 301, and they don't hit the stock wattage until they've hit 5.2GHz. When they get the 9900K scores up, I expect that 220W number to be cut nearly in half by a proper voltage setting.
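[Ed.: the undervolting observation above roughly follows the classic dynamic-power relation P ~ C * V^2 * f. This is only a sanity-check sketch: the ~1.35 V stock voltage is an assumption (not from the review), and static/leakage power is ignored.]

```python
# Scale a reference power figure by the dynamic-power relation P ~ C * V^2 * f.
def scaled_power(p_ref, v_ref, f_ref, v_new, f_new):
    return p_ref * (v_new / v_ref) ** 2 * (f_new / f_ref)

# 9700K Blender: 156 W at an assumed ~1.35 V, 4.6 GHz all-core turbo,
# undervolted to the commenter's 1.025 V at the same clock.
p_undervolt = scaled_power(156, v_ref=1.35, f_ref=4.6, v_new=1.025, f_new=4.6)
print(round(p_undervolt))  # -> 90, in the ballpark of the 87 W reported above
```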
3dGfx - Friday, October 19, 2018 - link
How can you claim the 9900K is the best when you never tested the HEDT parts in gaming? Making such claims really makes AnandTech look bad. I hope you fix this oversight so Skylake-X can be compared properly to the 9900K and the Skylake-X refresh parts! -- There was supposed to be a part 2 to the i9-7980XE review and it never happened, so gaming benchmarks were never done, and the i9-7940X and i9-7920X weren't tested either. HEDT is a gaming platform, since it has no ECC support and isn't marketed as a workstation platform. Curious that Intel says the 8-core part is now "the best" and you just go along with that without testing their flagship HEDT in games.
DannyH246 - Friday, October 19, 2018 - link
If you want an unbiased review go here...https://www.extremetech.com/computing/279165-intel...
Anandtech is a joke. Has been for years. Everyone knows it.
TEAMSWITCHER - Friday, October 19, 2018 - link
Thanks... but no thanks. Why did you even come here? Just to post this? WEAK!
Arbie - Friday, October 19, 2018 - link
What a stupid remark. And BTW Extremetech's conclusion is practically the same as AT's. The bias here is yours.