120 Comments

  • realbabilu - Monday, November 2, 2020 - link

    That larger cache may need a specifically optimized BLAS.
  • Kurosaki - Monday, November 2, 2020 - link

    Did you mean BIAS?
  • ballsystemlord - Tuesday, November 3, 2020 - link

    BLAS == Basic Linear Algebra Subprograms.
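    For context on why cache size and BLAS tuning interact: dense BLAS routines like GEMM are blocked so the working tiles stay resident in cache, and the block size is tuned to the cache hierarchy. A minimal C sketch of the idea (the BLOCK value of 64 is an arbitrary placeholder, not a tuned figure):

    #include <stddef.h>

    /* Naive cache-blocked GEMM: C += A * B for n x n row-major matrices.
       Real BLAS libraries tune BLOCK so the A/B/C tiles fit in L2/L3 (or L4). */
    #define BLOCK 64

    void blocked_gemm(size_t n, const double *A, const double *B, double *C)
    {
        for (size_t ii = 0; ii < n; ii += BLOCK)
            for (size_t kk = 0; kk < n; kk += BLOCK)
                for (size_t jj = 0; jj < n; jj += BLOCK)
                    for (size_t i = ii; i < ii + BLOCK && i < n; i++)
                        for (size_t k = kk; k < kk + BLOCK && k < n; k++) {
                            double a = A[i * n + k];
                            for (size_t j = jj; j < jj + BLOCK && j < n; j++)
                                C[i * n + j] += a * B[k * n + j];
                        }
    }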
  • Kamen Rider Blade - Monday, November 2, 2020 - link

    I think there is merit to having Off-Die L4 cache.

    Imagine the low latency and high bandwidth you can get by shoving in some stacks of HBM2 or DDR5, whichever is more affordable and can better use the bandwidth over whatever link you're providing.
  • nandnandnand - Monday, November 2, 2020 - link

    I'm assuming that Zen 4 will add at least 2-4 GB of L4 cache stacked on the I/O die.
  • ichaya - Monday, November 2, 2020 - link

    Waiting for this to happen... have been since TR1.
  • nandnandnand - Monday, November 2, 2020 - link

    Throw in an RDNA 3 chiplet (in Ryzen 6950X/6900X/whatever) for iGPU and machine learning, and things will get really interesting.
  • ichaya - Monday, November 2, 2020 - link

    Yep.
  • dotjaz - Saturday, November 7, 2020 - link

    That's definitely not happening. You are delusional if you think RDNA3 will appear as iGPU first.

    At best we can hope for the next I/O die to integrate full VCN/DCN with a few RDNA2 CUs.
  • dotjaz - Saturday, November 7, 2020 - link

    Also doubly delusional if you think RDNA3 is any good for ML. CDNA2 is designed for that.
    Adding a powerful iGPU to Ryzen 9 servers literally no purpose. Nobody will be satisfied with that tiny performance. Guaranteed recipe for instant failure.

    The only iGPU that would make sense is a mini iGPU in the I/O die for desktop/video decoding, OR an iGPU coupled with a low-end CPU for a complete entry-level gaming SoC, aka an APU. A chiplet design makes almost no sense for an APU as long as GloFo is in play.
  • dotjaz - Saturday, November 7, 2020 - link

    *serves
  • Samus - Monday, November 9, 2020 - link

    That's not true. There were numerous requests from OEMs for Intel to make iGPU-enabled Xeons for the specific purpose of QuickSync, so there are indeed various applications other than ML where an iGPU in a server environment is desirable.
  • erikvanvelzen - Saturday, November 7, 2020 - link

    Ever since the Pentium 4 Extreme Edition I've wondered why intel does not permanently offer a top product with a large L3 or L4 cache.
  • plonk420 - Monday, November 2, 2020 - link

    Been waiting for this to happen... since the Fury/Fury X. Would gladly pay the $230ish they want for a 6-core Zen 2 APU, or even one with "just" 4c8t + Vega 8 (preferably 11) + HBM(2).
  • ichaya - Monday, November 2, 2020 - link

    With the RDNA2 Infinity Cache announcement and the ~2x increase in effective BW from it, and given that Zen has always done better with more memory BW, it's just dead obvious now that an L4 cache on the I/O die would increase performance (especially in workloads like gaming) more than its power cost.

    I really should have said waiting since Zen 2, since that was when the I/O die was introduced, but I'll settle for eDRAM or SRAM L4 on the I/O die, as that would be easier than a CCX with HBM2 as cache. Some HBM2 APUs would be nice though.
  • throAU - Monday, November 2, 2020 - link

    I think that very soon, for consumer-focused parts, on-package HBM won't necessarily be cache; it'll be main memory. End users don't need massive amounts of RAM in end-user devices, especially as more workload moves to the cloud.

    8 GB of HBM would be enough for the majority of end user devices for some time to come and using only HBM instead of some multi-level caching architecture would be simpler - and much smaller.
  • Spunjji - Monday, November 2, 2020 - link

    Really liking the level of detail from this new format! Fascinated to see how the Broadwell secret sauce has stood up to the test of time, too.

    Hopefully the new gaming CPU benchmarks will finally put most of the benchmark bitching to bed - for sure it goes to show (at quite some length) that the ranking under artificially CPU-limited scenarios doesn't really correspond to the ranking in a realistic scenario, where the CPU is one constraint amongst many.

    Good work all-round 👍👍
  • lemurbutton - Monday, November 2, 2020 - link

    Anandtech: We're going to review a product from 2015, but we're not going to review the RTX 3080, RTX 3090, or the RTX 3070.

    If I were management, I'd fire every one of the editors.
  • e36Jeff - Monday, November 2, 2020 - link

    The guy that tests GPUs was affected by the Cali wildfires. Ian wouldn't be writing a GPU review regardless; he does CPUs.
  • Qasar - Monday, November 2, 2020 - link

    Seems a few people either don't understand that, or don't care.
  • 29a - Monday, November 2, 2020 - link

    What's the excuse for fucking up CPU reviews? Ian does those. I'm not sure we ever got a finished Ryzen review.
  • Qasar - Monday, November 2, 2020 - link

    If you are unhappy with things here, then feel free to go elsewhere.
    Problem solved.
  • Ian Cutress - Tuesday, November 3, 2020 - link

    Which Ryzen review are you talking about?
  • 29a - Tuesday, November 3, 2020 - link

    The first one.
  • 29a - Tuesday, November 3, 2020 - link

    Looks like the editor just deleted the unfinished pages; I can't find the blank page about StoreMI anymore. I guess that's one way to finish an article.
  • gagegfg - Monday, November 2, 2020 - link

    I ask the same thing. Could it be something wrong with NVIDIA?
    And Ryzen 5000 - when? :))
  • 29a - Monday, November 2, 2020 - link

    Agreed, this site has gone to shit since Anand left.
  • plonk420 - Monday, November 2, 2020 - link

    TBF, the transition to YouTube was happening at that time. My other fave site, TechReport, died soon after too. At least GN took over covering 0.1% lows.
  • Makaveli - Monday, November 2, 2020 - link

    Feel free to close the door on your way out.
  • GreenReaper - Monday, November 2, 2020 - link

    Just wait, they'll turn the lights off too. Then we'll have dark mode!
  • Smell This - Monday, November 2, 2020 - link


    Some of you'uns want some cheese with that whine?
  • 29a - Monday, November 2, 2020 - link

    I've been reading this site since '97 and I don't recall Anand releasing any half-finished articles that were never completed, or skipping any major hardware releases. So shove your cheese up your ass.
  • Qasar - Monday, November 2, 2020 - link

    * hands 29a some cheese *
    As I said, if unhappy, go somewhere else.
  • Stochastic - Monday, November 2, 2020 - link

    Ampere reviews are a dime a dozen. This kind of article is something you only see from Anandtech and a handful of other sites.
  • Tomatotech - Monday, November 2, 2020 - link

    Agree. Dozens of RTX 3070 / 80 / 90 reviews everywhere but only one 2020 Broadwell eDRAM deep dive on the whole web (I think) and guess what, it’s on AnandTech. That’s worth cherishing.
  • brucethemoose - Monday, November 2, 2020 - link

    TBH I'm more interested in these esoteric deep dives.

    There are 100 other sites that reviewed Ampere on launch day.
  • liquid_c - Monday, November 2, 2020 - link

    Go read pcgamer, please.
  • powerarmour - Monday, November 2, 2020 - link

    But hey, here's another Intel article while you wait eh?
  • Makaveli - Monday, November 2, 2020 - link

    There are plenty of reviews out already for you to go get a general idea of the product.

    Quit your crying.
  • bernstein - Monday, November 2, 2020 - link

    GDDR6 would be ideally suited as an L4 CPU cache... it has >500GB/s throughput and relatively low cost...
  • e36Jeff - Monday, November 2, 2020 - link

    Sure, if you build a 256-bit bus and somehow cram 8 GDDR6 chips onto the CPU package. You'd also be losing 30-40W of TDP to that.
    This is an application that HBM2 would be much better for. You can easily cram up to 4GB into the package with a much lower TDP impact and still get your 500+GB/s throughput. The biggest issue for this is going to be the impact of having to add in another memory controller and the associated die space and power that it eats up.
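    The bandwidth figures being thrown around here are easy to sanity-check. A quick worked example in C (the 16 Gbps and 2.0 Gbps pin rates are assumptions typical of 2020-era parts, not figures from the thread):

    #include <stdio.h>

    int main(void)
    {
        /* GDDR6: 8 chips x 32-bit = 256-bit bus, assumed 16 Gbps per pin. */
        double gddr6 = 256.0 * 16.0 / 8.0;   /* 512 GB/s */

        /* HBM2: one 1024-bit stack, assumed 2.0 Gbps per pin. */
        double hbm2 = 1024.0 * 2.0 / 8.0;    /* 256 GB/s per stack */

        printf("GDDR6, 256-bit @ 16 Gbps : %.0f GB/s\n", gddr6);
        printf("HBM2, 1 stack @ 2.0 Gbps: %.0f GB/s (2 stacks: %.0f GB/s)\n",
               hbm2, 2.0 * hbm2);
        return 0;
    }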
  • FreckledTrout - Monday, November 2, 2020 - link

    This is also how I see it playing out. Certainly by the time Intel/AMD switch to using GAAFET, maybe before. You just need a couple of die shrinks that bring densities up and power down.
  • bernstein - Monday, November 2, 2020 - link

    scratch that, GDDR6 has much too high latency...
  • stanleyipkiss - Monday, November 2, 2020 - link

    The 5775C was ahead of its time. Don't know why they didn't go down that rabbit hole (of increasing the cache size with each gen).
  • hecksagon - Monday, November 2, 2020 - link

    Adding an extra 84mm2 of die area is a recipe for margin erosion, especially when the benefit is situational.
  • CrispySilicon - Monday, November 2, 2020 - link

    Well, I use a 5775C for my main home PC (using it now) and it's more than that. Broadwell was designed for low power. It doesn't run well over 4GHz and it's not made to.

    My rig idles at about 800MHz, clocks up to 4GHz on all cores, 2GHz on the eDRAM, and 2GHz on DDR3L (overclocked 1866 HyperX Fury). Yes, 3L, because THAT'S where the magic happens: low-power performance.

    I've also used TridentX 2400CL10 modules in it, not worth the higher voltage.

    I'm going to upgrade finally next year. CXL and DDR5 will finally retire this diamond in the rough.

    Retest with nothing in the BIOS changed except the eDRAM multiplier to 20 and see what happens.
  • Notmyusualid - Wednesday, November 4, 2020 - link

    I usually run my Broadwell at 4.4GHz 24/7. However, I have a failed BIOS battery, so I'm using the m/b default 4.0GHz overclock settings today. I don't let mine idle at low speeds; it's High Performance mode only, & I only boot the desktop for gaming or Software Defined Radio, both of which want GHz.

    Memory is Vengeance LED 3200MHz (CL15 & only stable at 3000MHz, XMP is not stable either), and 32GB is currently installed.

    Given;
    C:\Windows\System32>winsat mem
    Windows System Assessment Tool
    > Running: Feature Enumeration ''
    > Run Time 00:00:00.00
    > Running: System memory performance assessment ''
    > Run Time 00:00:05.45
    > Memory Performance 54386.55 MB/s
    > Total Run Time 00:00:06.65

    I think that is why my Broadwell missed out on any eDRAM - it wasn't necessary.

    Dolphin runs about 35x seconds, as I remember it.

    6950X running cool in 2020...
  • MrCommunistGen - Monday, November 2, 2020 - link

    HA. Epic timing. Just starting to read this now, but I recently built a system with a Broadwell-based Xeon E3 chip I got for cheap on eBay. Mostly just because I wanted to play with a chip that had eDRAM and the price of entry for an i5 or i7 has remained pretty high.

    This will be a very interesting read!
  • alufan - Monday, November 2, 2020 - link

    News all day, as long as it's about Intel, or so it seems on here. Said it before and have seen nothing since to change my mind.
  • Leeea - Monday, November 2, 2020 - link

    great review

    sadly i7-5775C's are still selling for $100+ on ebay. Not quite worth the upgrade over the i7-4790K, with graphics cards continuing to be by far the largest factor.

    But to me it also shows there is no need to jump into the latest and greatest cpu, because these old cpus are still keeping up just fine.
  • plonk420 - Monday, November 2, 2020 - link

    > sadly i7-5775C's are still selling for $100+ on ebay

    ohhhh, that makes me curious as to how they compare to 3100/3300X chips now
  • Roy2002 - Monday, November 2, 2020 - link

    So the conclusion is Optane could play a big role in the future?
  • Leeea - Monday, November 2, 2020 - link

    No.

    Optane is slower than normal RAM.

    Optane is a faster, more limited version of an SSD. Specifically, it has RAM-like read performance in some areas, while having SSD-like write performance in other areas.
  • Jorgp2 - Monday, November 2, 2020 - link

    SSDs are much slower than Optane in writes.

    The worst case performance for Optane is better than the best performance for an SSD in writes.
  • FunBunny2 - Monday, November 2, 2020 - link

    "The worst case performance for Optane is better than the best performance for an SSD in writes."

    Mayhaps Optane will do best when used with code compiled to use only memory-to-memory execution and no hard I/O?
  • Tomatotech - Monday, November 2, 2020 - link

    I would have loved to see Intel embed a couple of gig of Optane on every mobo or in every CPU - at scale it would have been cheap - and we would get the benefits of instant app start, damn fast reboot etc. That would make a bigger difference to the end user experience than 15% on benchmarks. But no, it came out with poorly implemented tiering software, via expensive almost unused add-in cards. Optane had so much mass-market potential, sadly I think it’s screwed now for use outside the datacentre. Intel of all people should know how tiered storage works, why did they screw it up so badly? They even had a shining example in Apple’s Fusion drive to follow (copy) but still messed it up.
  • Jorgp2 - Monday, November 2, 2020 - link

    Have you considered asking Supermicro for a Skylake GT4e review sample?
  • f00f - Monday, November 2, 2020 - link

    That's Intel's vision of "embedded" DRAM, which is only sort-of embedded, because it is on a separate die. If you want a proper implementation, look at the POWER7 processor (2010), with its L3 as eDRAM on the same die as the cores.
  • jospoortvliet - Wednesday, November 4, 2020 - link

    I am a bit surprised AMD didn't embed 32 or 64MB of memory in the I/O chip... that would probably be relatively easy and affordable.
  • brucethemoose - Monday, November 2, 2020 - link

    Is HBM2e access latency really lower than DDR4/5?

    I can't find any timing info or benchmarks, but my understanding is that it's lower than GDDR6, which already has much higher latency than DDR4.
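    For anyone wanting to measure this themselves rather than hunt for benchmarks: memory latency is usually probed with a dependent pointer chase over a buffer larger than all the caches. A rough, Linux-flavoured C sketch (the buffer size and step count are arbitrary choices):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        /* 256 MB buffer: large enough to spill any L3/L4 on current CPUs. */
        size_t n = (size_t)256 * 1024 * 1024 / sizeof(size_t);
        size_t *buf = malloc(n * sizeof(size_t));
        if (!buf) return 1;

        /* Sattolo shuffle: builds one big random cycle so each load depends
           on the previous one and the prefetchers can't guess the pattern. */
        for (size_t i = 0; i < n; i++) buf[i] = i;
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;
            size_t t = buf[i]; buf[i] = buf[j]; buf[j] = t;
        }

        size_t steps = 50 * 1000 * 1000, p = 0;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t s = 0; s < steps; s++) p = buf[p];   /* serial dependent loads */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("~%.1f ns per load (ignore: %zu)\n", ns / steps, p);
        free(buf);
        return 0;
    }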
  • PeachNCream - Monday, November 2, 2020 - link

    I'd like to say thanks for this review! I really love the look backwards at older hardware in relationship to modern systems. It really shows that in processor power terms that Broadwell/Haswell remain fairly relevant and the impact of eDRAM (or non-impact in various workloads) makes for really interesting reading.
  • brucethemoose - Monday, November 2, 2020 - link

    Another possibility: the "Radeon Cache" on an upcoming APU acts as a last level cache for the entire chip, just like Apple (and Qualcomm?) SoCs.

    There are no extra packaging costs, no fancy 2nd chip, and it would save power.
  • Jorgp2 - Monday, November 2, 2020 - link

    You do realize that Intel has had that about as long as they've had GPUs on their CPUs right?
  • brucethemoose - Monday, November 2, 2020 - link

    You mean the iGPUs share L3?

    Well, it wasn't a particularly large cache or a powerful GPU until Broadwell came around.
  • Jorgp2 - Tuesday, November 3, 2020 - link

    >Well, it wasn't a particularly large cache or a powerful GPU until Broadwell came around.

    Larger than the caches on even AMD's largest GPUs until recently.

    Hawaii had a 4MB cache, Vega had 6MB I believe.
  • eastcoast_pete - Monday, November 2, 2020 - link

    Thanks Ian, great article! Regarding a large, external L4 Cache: any guess on how speed and latency of eDRAM made in more modern silicon would compare with Broadwell's 22 nm one? Let's say if made in Intel's current 14 nm (++ etc)? And, if that'll speed it up enough to make it significantly better than current fast DDR4, would that be a way for Intel to put some "electronic nitrous" on its Tiger Lake and Rocket Lake chips? Because they do need something, or they'll get spanked badly by the new Ryzens.
  • brucethemoose - Monday, November 2, 2020 - link

    I'm guessing most of the latency comes from the travel between the chips, not from the speed of the eDRAM itself. So a shrink wouldn't help much, but EMIB might?

    There is talk of replacing the on-chip SRAM in L3 caches with eDRAM, kind of like what IBM already does. So basically, it's a size vs. speed tradeoff, which is very interesting indeed.
  • quadibloc - Monday, November 2, 2020 - link

    Well, AMD seems to think it was a good idea, given the 128 MB Infinity Cache on their latest graphics cards...
  • Leeea - Monday, November 2, 2020 - link

    Close, but not quite the same.

    AMD has their infinity cache in the GPU die. One piece of silicon for the whole thing. This may have faster I/O and less power consumption.

    Intel's eDRAM caches were a separate piece of silicon altogether.
  • Khenglish - Monday, November 2, 2020 - link

    The Infinity Cache is SRAM, which will be faster but much lower density. Only IBM ever integrated DRAM on the same die as a processor. The DRAM capacitor takes up the space where you want to put all your CPU wiring.
  • Quantumz0d - Monday, November 2, 2020 - link

    Always wondered why Intel was so fucking foolish, making that shitty iGPU die instead of putting eDRAM on the chip. It would have given a massive boost to all their CPUs. A big missed opportunity. AMD had this "Game Cache" on Zen 2, and now with RDNA2, "Infinity Cache" again...
  • jospoortvliet - Wednesday, November 4, 2020 - link

    I guess they did the math on cost and power. They always had better memory controllers and prefetchers, so they didn't benefit as much from cache - they also have the memory controller on-die, unlike AMD with their I/O die. So Intel would benefit waaaay less than AMD does, in almost every way.
  • dragosmp - Monday, November 2, 2020 - link

    "...the same 22nm eDRAM chip is still in use today with Apple's 2020 base Macbook Pro 13"

    Ahem, what? Is that CPU an off-the-roadmap Tiger Lake?
  • Jorgp2 - Monday, November 2, 2020 - link

    Tiger Lake doesn't have the hardware for an L4.

    It's probably the Skylake version
  • colinisation - Monday, November 2, 2020 - link

    Do the part numbers on Intel CPUs mean anything? I picked up a 5775C a week ago and have not installed it yet, but the part number starts with "L523" - I just assume it is a later batch than what is in the review.
  • ilt24 - Tuesday, November 3, 2020 - link

    @colinisation

    The L523 is the first 4 characters of the Finished Process Order, or Batch #.

    The L says it was packaged in Malaysia
    The 5 says it was packaged in 2015
    The 23 says it was packaged on the 23rd week

    Digits 5-8 are the specific lot number of the wafer the die came from.
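    A tiny C sketch of that decoding, for illustration only; the 'L' = Malaysia mapping and the field layout come from the comment above, while the assumption that the year digit means 201x is mine:

    #include <stdio.h>
    #include <string.h>

    /* Decode an Intel FPO/batch code along the lines described above. */
    static void decode_batch(const char *fpo)
    {
        if (strlen(fpo) < 4) { printf("code too short\n"); return; }
        printf("plant: %c%s\n", fpo[0], fpo[0] == 'L' ? " (Malaysia)" : "");
        printf("year : 201%c (decade assumed)\n", fpo[1]);
        printf("week : %.2s\n", fpo + 2);
        if (strlen(fpo) >= 8)
            printf("lot  : %.4s\n", fpo + 4);
    }

    int main(void)
    {
        decode_batch("L523");   /* the code mentioned in the comment above */
        return 0;
    }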
  • colinisation - Tuesday, November 3, 2020 - link

    @ilt24 - Thank you very much
  • Mday - Monday, November 2, 2020 - link

    I expected more eDRAM implementations after Broadwell coming from Intel and AMD on the CPU side, as a low latency - high "capacity" cache, particularly after the launch of HBM. It made me wonder why Intel even bothered, or what shifts in strategies moved them to and away from eDRAM.
  • ichaya - Monday, November 2, 2020 - link

    This is really the first desktop part I'm hearing of; weren't most of these "Iris Pro" chips sold in Apple laptops, with maybe a small minority sold by other laptop OEMs? I believe so.
  • krowes - Monday, November 2, 2020 - link

    CL22 memory for the Ryzen setup? Makes absolutely no sense.
  • Ian Cutress - Tuesday, November 3, 2020 - link

    That's JEDEC standard.
  • Khenglish - Monday, November 2, 2020 - link

    Was anyone else bothered by the fact that Intel's highest performing single thread CPU is the 1185G7, which is only accessible in 28W tiny BGA laptops?

    Also, the 128MB eDRAM cache does seem to give on average a 10% improvement over the eDRAM-less 4790S at the same TDP. I would love to see eDRAM on more CPUs. It's so rare to need more than 8 cores. I'd rather have 8 cores with eDRAM than 16+ cores and no eDRAM.
  • ichaya - Monday, November 2, 2020 - link

    There's definitely a cost trade-off involved, but with an I/O die since Zen 2, it seems like AMD could just spin up a different I/O die, and justify the cost easily by selling to HEDT/Workstation/DC.
  • Notmyusualid - Wednesday, November 4, 2020 - link

    Chalk me up as 'bothered'.
  • zodiacfml - Monday, November 2, 2020 - link

    Yeah, but Intel has been all about squeezing the last dollar out of its products for a couple of years now.
  • Endymio - Monday, November 2, 2020 - link

    CPU registers -> 3 levels of cache -> eDRAM -> DRAM -> Optane -> SSD -> hard drive.

    The human brain gets by with 2 levels of storage. I really don't feel that computers should require 9. The entire approach needs rethinking.
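    For a sense of why the hierarchy has grown so deep, here are very rough order-of-magnitude access latencies for the levels listed above (ballpark figures from public sources, not measurements from this article):

    #include <stdio.h>

    struct level { const char *name; double approx_ns; };

    int main(void)
    {
        /* Approximate, order-of-magnitude numbers only. */
        struct level levels[] = {
            { "L1 cache",    1 },
            { "L2 cache",    4 },
            { "L3 cache",    15 },
            { "eDRAM L4",    40 },
            { "DRAM",        80 },
            { "Optane DIMM", 350 },
            { "NVMe SSD",    100000 },    /* ~100 us */
            { "Hard drive",  10000000 },  /* ~10 ms */
        };
        for (size_t i = 0; i < sizeof levels / sizeof levels[0]; i++)
            printf("%-12s ~%.0f ns\n", levels[i].name, levels[i].approx_ns);
        return 0;
    }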
  • Tomatotech - Tuesday, November 3, 2020 - link

    You remember everything without writing down anything? You remarkable person.

    The rest of us rely on written materials, textbooks, reference libraries, wikipedia, and the internet to remember stuff. If you jot down all the levels of hierarchical storage available to the average degree-educated person, it's probably somewhere around 9 too depending on how you count it.

    Not everything you need to find out is on the internet or in books either. Data storage and retrieval also includes things like having to ask your brother for Aunt Jenny's number so you can ring Aunt Jenny and ask her some detail about early family life, and of course Aunt Jenny will tell you to go and ring Uncle Jonny, but she doesn't have Jonny's number, wait a moment while she asks Max for it and so on.
  • eastcoast_pete - Tuesday, November 3, 2020 - link

    You realize that the closer the cache is to actual processor speed, the more demanding the manufacturing gets and the more die area it eats. That's why there aren't any (consumer) CPUs with 1 or more MB of L1 Cache. Also, as Tomatotech wrote, we humans use mnemonic assists all the time, so the analogy short-term/long-term memory is incomplete. Writing and even drawing was invented to allow for longer-term storage and easier distribution of information. Lastly, at least IMO, it boils down to cost vs. benefit/performance as to how many levels of memory storage are best, and depends on the usage scenario.
  • Oxford Guy - Monday, November 2, 2020 - link

    Peter Bright of Ars in 2015:

    "Intel’s Skylake lineup is robbing us of the performance king we deserve. The one Skylake processor I want is the one that Intel isn't selling.

    in games the performance was remarkable. The 65W 3.3-3.7GHz i7-5775C beat the 91W 4-4.2GHz Skylake i7-6700K. The Skylake processor has a higher clock speed, it has a higher power budget, and its improved core means that it executes more instructions per cycle, but that enormous L4 cache meant that the Broadwell could offset its disadvantages and then some. In CPU-bound games such as Project Cars and Civilization: Beyond Earth, the older chip managed to pull ahead of its newer successor.

    in memory-intensive workloads, such as some games and scientific applications, the cache is better than 21 percent more clock speed and 40 percent more power. That's the kind of gain that doesn't come along very often in our dismal post-Moore's law world.

    Those 5775C results tantalized us with the prospect of a comparable Skylake part. Pair that ginormous cache with Intel's latest-and-greatest core and raise the speed limit on the clock speed by giving it a 90-odd W power envelope, and one can't help but imagine that the result would be a fine processor for gaming and workstations alike. But imagine is all we can do because Intel isn't releasing such a chip. There won't be socketed, desktop-oriented eDRAM parts because, well, who knows why.

    Intel could have had a Skylake processor that was exciting to gamers and anyone else with performance-critical workloads. For the right task, that extra memory can do the work of a 20 percent overclock, without running anything out of spec. It would have been the must-have part for enthusiasts everywhere. And I'm tremendously disappointed that the company isn't going to make it."

    In addition to Bright's comments, I remember AnandTech's article that showed the 5675C beating or equalling the 5775C in one or more gaming tests, apparently largely due to throttling caused by Intel's decision to hobble Broadwell with such a low TDP.
  • Jorgp2 - Monday, November 2, 2020 - link

    Why didn't he buy it then?

    There were even many *Lake refreshes.
  • Oxford Guy - Wednesday, November 4, 2020 - link

    He didn't buy a desktop, high-TDP Skylake with eDRAM because it was never produced. Intel decided to sell less for more, which it could do safely, since our capitalist system (in tech at least) is far more often about a near-total lack of competition than anything else. Read for comprehension.
  • Oxford Guy - Monday, November 2, 2020 - link

    So, the take-away here is that Intel was heavily sandbagging - not bothering to take advantage of the benefit eDRAM provides (for gaming especially).

    $10 worth of parts, and gamers were expected to fork over big money for less performance.

    Hooray for lack of competition.
  • Nictron - Tuesday, November 3, 2020 - link

    My i7-5775C died last week after 4.5 years of service at an OC of 4.1-4.2 GHz. Now, seeing this review, I am quite sad, as it could've given a bit more.

    Upgraded to R5 3600XT for now and can always go 5000 series in future on the X570 platform.

    Hope competition stays strong!
  • Oxford Guy - Wednesday, November 4, 2020 - link

    You can hope or you can look at the facts. It hasn’t been strong. That’s why Intel was able to sandbag so extremely.
  • alufan - Tuesday, November 3, 2020 - link

    The Intel skew on this site is getting silly; it's becoming an Intel promo machine!
    You benchmark, but like many laptop providers you hamstring the AMD CPU with the worst and slowest components. We all know Ryzen CPUs work best with fast RAM - in fact, you have stated so yourselves in the past on this very site - yet you now choose to test the Ryzen option with CL22 bargain-basement RAM..... Makes me wonder how much Intel paid for this review of a 5-year-old CPU, just to keep the blue option at the top of the page. AnandTech is a shameful parody of a neutral review site, and frankly the owners and editors have exchanged integrity for, well, whatever you want to call it. Shame on you.
  • Ian Cutress - Tuesday, November 3, 2020 - link

    I test at JEDEC speeds. For Ryzen that's DDR4-3200, and JEDEC subtimings are CL22. If you want to complain, complain to JEDEC to ask for something faster, or ask companies to validate memory controllers beyond JEDEC standards. Otherwise it's overclocking, and if we're overclocking, then who cares about core frequency or power anyway.

    https://www.youtube.com/watch?v=jQe5j7xIcog - I even did a video on it.

    I do tons of AMD coverage. Year in year out interviews of CEO and CTO of AMD, but no equivalent of Intel. Deep dives into every major architecture, with analysis of instruction performance as well as cache hierarchy. Reviews of almost every single Ryzen product, as we're sampled most of them. If we were that big of an Intel shill, why does AMD supply us what we ask for?
  • alufan - Tuesday, November 3, 2020 - link

    Ahh, so the RAM kit you used for the Intel test just happened to fall into the slots with its CL16 then? Is that JEDEC standard?

    I don't want any particular preference shown to either brand - I will admit, however, to not liking Intel due to former poor experience - but I think it's important, when charts and such are published, that a level field is used, because at some point somebody may well use those published figures as an illustration of one product's superiority over another. Intel is very good at doing just that, and I will cite the last Threadripper release for that one: just so they could have a set of tables showing the Intel product on top, even if it was for a few hours.
  • qwertymac93 - Tuesday, November 3, 2020 - link

    Where did you find the latency settings the tests were performed at? I didn't see the latencies mentioned in the test setup page.
  • alufan - Tuesday, November 3, 2020 - link

    "Where did you find the latency settings the tests were performed at? I didn't see the latencies mentioned in the test setup page."

    look up the parts on the makers website
  • Billy Tallis - Wednesday, November 4, 2020 - link

    Ian already said he tests at JEDEC speeds, which includes the latency timings. Using modules that are capable of faster timings does not prevent running them at standard timings.
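    For what it's worth, the absolute latency gap between the timings being argued about is small. A quick worked comparison in C, assuming DDR4-3200 for both kits:

    #include <stdio.h>

    /* First-word CAS latency in ns: CL cycles at the command clock, which
       runs at half the transfer rate (1600 MHz for DDR4-3200). */
    static double cas_ns(int cl, double mts)
    {
        return cl * 2000.0 / mts;
    }

    int main(void)
    {
        printf("DDR4-3200 CL22 (JEDEC): %.2f ns\n", cas_ns(22, 3200));  /* 13.75 ns */
        printf("DDR4-3200 CL16 (XMP)  : %.2f ns\n", cas_ns(16, 3200));  /* 10.00 ns */
        return 0;
    }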
  • Quantumz0d - Tuesday, November 3, 2020 - link

    Don't even bother Ian with these people.
  • Nictron - Wednesday, November 4, 2020 - link

    I appreciate the review and the context over a period of time. Having a baseline comparison is important, and it is up to us, the readers, to determine the optimal environment we would like to invest in. As soon as we do, the price starts to skyrocket and comparisons are difficult.

    Reviews like this also show that a well thought out ecosystem can deliver great value. Companies are here to make money and I appreciate reviewers that provide baseline compatible testing over time for us to make informed decisions.

    Thank you and kind regards,
  • GeoffreyA - Tuesday, November 3, 2020 - link

    Thanks, Ian. I thoroughly enjoyed the article and the historical perspective especially. And the technical detail: no other site can come close.
  • eastcoast_pete - Tuesday, November 3, 2020 - link

    Ian, could you comment on the current state of the art of eDRAM? How fast can it be, how low can the latency go? Depending on those parameters and the difficulty of manufacturing, there might be a number of uses that make sense.
    One where it could make sense is to allow Xe graphics to use cheaper and lower-power LPDDR4 or LPDDR5 RAM without taking a large performance hit vs. GDDR6. A 128 or 256 MB eDRAM cache might just do that, and still keep costs lower. Pure speculation, of course.
  • DARK_BG - Tuesday, November 3, 2020 - link

    Hi, what I'm wondering is where the 30% gap between the 5775C and 4790K in games came from, compared to your original review and all the other reviews of the 5775C out there. Since I'm on a Z97 platform with a 4.0GHz Xeon, moving to a 4770K or 4790K doesn't make any sense given their second-hand prices, but the 5775C in this review makes a lot of sense.

    So is it the OS, the drivers, some BIOS settings, or were the systems in the older reviews just GPU-limited, failing to expose the CPU performance?
  • jpnex - Friday, January 8, 2021 - link

    Lol, no, the i7-5775C is just stronger than an i7-4790K; this is a known fact. Other benchmarks show the same thing. Old benchmarks don't show it because back then people didn't know that deactivating the iGPU would give a performance boost.
  • DARK_BG - Wednesday, July 20, 2022 - link

    I forgot back then to reply. Based on this review I sourced a 5775C (for a little less than $100; these days they go for $140-150) coupled with an Asus Z97 Pro, and after some tweaking (CPU at 4.1GHz, eDRAM at 2000MHz and some other minor stuff I've already forgotten), the difference compared to the 4.0GHz Xeon in games was mind-blowing. Later I was able to source 32GB of Corsair Dominator DDR3 2400MHz CL10, just for fun, to make it a top-spec config. :)

    It is a very capable machine, but these days I'll swap it for a Ryzen 5800X3D to catch the final train on the fastest Windows 7-capable gaming system. Yeah, I know it is an OLD OS, but everything I need has run flawlessly for more than a decade, with only one reinstall 7 years ago due to an SSD failure. It is my only personal Intel system in the past 22 years, since for the first time it was, for a moment, the best price/performance second-hand platform; all the rest were AMD-based and I keep them all in working condition.

    BTW, I was able to run Windows XP 64-bit on the Z97 platform. I just need to swap the GTX 1070 for a GTX 980/980 Ti to be fully functional; everything else runs like a charm under XP. I was able to hack the driver to install as a GTX 960, so I have 2D hardware acceleration under XP on the GTX 1070, since NVIDIA hasn't changed anything in regard to 2D compared to the previous generation.
  • dew111 - Tuesday, November 3, 2020 - link

    Rocket Lake should have been the Comet Lake processor with eDRAM. Instead, they'll be lucky to beat Comet Lake at all.
  • erotomania - Tuesday, November 3, 2020 - link

    Thanks, Ian. I enjoyed this article from a NUC8i7BEH that has 128MB of coffee-flavored eDRAM. Also, thanks Ganesh for the recent reminder that Bean > Frost.
  • dsplover - Tuesday, November 3, 2020 - link

    For digital audio applications the i7-5775C @ 3.3GHz was incredible when disabling the Iris GFX, turning the cache over to audio, then running a discrete GFX card.

    Bested my i7-4790Ks.
    Tried OC'ing, but even with the kick-butt Supermicro H70 it was unstable, as the ring bus/L4 would also clock up and choke @ 2050MHz.

    This rig allowed really tight low-latency timings, and I prayed they would release future designs with a larger cache.
    AMD beat them to it w/Matisse, which was good for 8 cores only.

    The new 5000s are going to be Digital Audio dreams @ low wattage.

    Intel just keeps lagging behind.
  • ironicom - Tuesday, November 3, 2020 - link

    FPS is irrelevant in Civ; turn time and load time are what matter.
  • vorsgren - Tuesday, November 3, 2020 - link

    Thanks for using my benchmark! Hope it was useful!
  • Nictron - Wednesday, November 4, 2020 - link

    Which benchmark was that?
  • erotomania - Wednesday, November 4, 2020 - link

    Google the username.
  • vorsgren - Wednesday, November 4, 2020 - link

    http://www.bay12forums.com/smf/index.php?topic=173...
  • Oxford Guy - Thursday, November 5, 2020 - link

    "The Intel skew on this site is getting silly its becoming an Intel promo machine!"

    Yes. An article that exposes how much Intel was able to get away with sandbagging because of our tech world's lack of adequate competition (seen in MANY tech areas to the point where it's more the norm than the exception) — clearly such an article is showing Intel in a good light.

    If you were an Intel shareholder.

    For everyone else (the majority of the readers), the article condemns Intel for intentionally hobbling Skylake's gaming performance. ArsTechnica produced an article about this five years ago when it became clear that Skylake wasn't going to have eDRAM.

    The ridiculousness of the situation (how Intel got away with charging premium prices for horribly hobbled parts - $10 worth of eDRAM missing, no less) shows the world's economic system in a particularly poor light. For all the alleged capitalism in tech, there certainly isn't much competition. That's why Intel didn't have to ship Skylake with eDRAM. Monopolization (and near-monopoly) enables companies to do what they want to do more than anything else: sell less for more. As long as regulators are toothless and/or incompetent, the situation won't improve much.
  • abufrejoval - Monday, November 9, 2020 - link

    Just picked up a NUC8i7BEH last week (quad i7, 48EU GT3e with 128MB eDRAM), because they dropped below €300 including VAT: A pretty incredible value at that price point and extremely compatible with just about any software you can throw at it.

    Yes, a Tiger Lake NUC11 would be better on paper, and I have tried getting a Ryzen 7 4800U (as the PN50-BBR748MD), but I've never heard of one actually shipping.

    It's my second NUC8i7BEH; I had gotten another a month or two previously, while it was still at €450, but decided to swap that for a hexa-core NUC10i7FNH (24EU, no eDRAM) at the same price before the 14-day zero-cost return period was up. GT3e + quad-core vs. GT2 + hexa-core was a tough call to make, but actually both mostly run server loads anyway. But at €300/quad vs €450/hexa the GT3e is quite simply free, when the silicon die area for the GT3e/quad is in all likelihood much greater than for the GT2/hexa, even without counting the eDRAM.

    My Whiskey Lake has a 200MHz lower top clock than the Comet Lake, but that doesn't show in single-core results, where the L4 seems to put Whiskey Lake consistently into a small lead.

    GT3e doesn't quite manage to double graphics performance over GT2, but I am not planning to use either for gaming. Both do fairly well at 4K on anything 2D; even Google Maps' 3D renders do pretty well.

    BTW: While Google Earth Pro's flight simulator actually gives a fairly accurate representation of the area where I live, it doesn't do great on FPS, even with an Nvidia GPU. By contrast, Microsoft's latest and greatest is a huge disappointment when it comes to terrain accuracy (buildings are pure fantasy, not related at all to what's actually there), but delivers OK FPS on my RTX 2080 Ti. No, I didn't try FlightSim on the NUCs...

    However, the 3D rendering pipeline Google has put into the browser variant of Google Maps beats the socks off both Google Earth Pro and Microsoft Flight: with Chrome leading over Firefox significantly, the 3D-modelled environment is mind-boggling even on the GT2 at 4K resolutions, and it's buttery smooth on GT3e. A browser-based flight simulator might actually give the best experience overall, which is quite hard to believe in a way.

    It makes me appreciate how good even iGPU graphics could be, if code were properly tuned to make do with what's there.

    And it exposes just how bad Microsoft Flight is with nothing but Bing map data underneath: those €120 were a complete waste of money, but I saved about that much by buying the second NUC8 later at the lower price.
