Graphics Performance Comparison

With the background and context of the benchmark covered, we can now dig into the data and see what we have to look forward to with DirectX 12 game performance. The benchmark ships with preconfigured batch files that launch the utility at one of three presets: 3840x2160 (4K) at ultra settings, 1920x1080 (1080p) also at ultra, or 1280x720 (720p) at low settings better suited to integrated graphics.
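
To put that workflow in concrete terms, here is a minimal sketch of scripting all three runs back to back. The batch filenames are hypothetical stand-ins; the actual names ship with the benchmark download.

    import subprocess
    import time

    # Hypothetical filenames for the three preconfigured presets;
    # substitute whatever names ship with the benchmark.
    PRESETS = {
        "4K Ultra":    "Benchmark_3840x2160_Ultra.bat",
        "1080p Ultra": "Benchmark_1920x1080_Ultra.bat",
        "720p Low":    "Benchmark_1280x720_Low.bat",
    }

    for label, batch in PRESETS.items():
        start = time.time()
        subprocess.run(batch, shell=True, check=True)  # blocks until the run finishes
        print(f"{label}: completed in {time.time() - start:.0f}s")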

Fable Legends Beta: 3840x2160 Ultra, Core i7

Fable Legends Beta: 3840x2160 Ultra, Core i5

Fable Legends Beta: 3840x2160 Ultra, Core i3

At 3840x2160, the GTX 980 Ti has a single-digit percentage lead over the AMD Fury X, but both stay above the bare minimum of 30 FPS regardless of which CPU is used.
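
For clarity, the "percentage lead" quoted throughout is the simple relative difference in average frame rate. A quick sketch with made-up numbers (the measured figures are in the charts above):

    def pct_lead(fps_a: float, fps_b: float) -> float:
        """Relative lead of card A over card B, in percent."""
        return (fps_a - fps_b) / fps_b * 100.0

    # Illustrative values only, not our measured results:
    print(f"{pct_lead(38.0, 36.0):.1f}%")  # 5.6%, a single-digit lead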

Fable Legends Beta: 1920x1080 Ultra, Core i7

Fable Legends Beta: 1920x1080 Ultra, Core i5

Fable Legends Beta: 1920x1080 Ultra, Core i3

With the i5 and i7 at 1920x1080 ultra settings, the GTX 980 Ti keeps that single-digit percentage lead, but at Core i3 levels of CPU power the difference is next to zero, suggesting we are CPU limited, even though the frame rate difference from i3 to i5 is minimal. Looking at the range of cards under the Core i7, the interesting result is that the GTX 970 just about hits the 60 FPS mark, while some of the older generation cards (7970/GTX 680) would require compromises in the settings to get past the 60 FPS barrier at this resolution. The GTX 750 Ti doesn't come anywhere close, suggesting that this game (under these settings) is targeting upper-mainstream to lower high-end cards. It would be interesting to see whether a single overriding game setting ends up crippling this class of GPU.

Fable Legends Beta: 1280x720 Low, Core i7

Fable Legends Beta: 1280x720 Low, Core i5

Fable Legends Beta: 1280x720 Low, Core i3

At the 720p low settings, the Core i7 pushes everything above 60 FPS, but you need at least an AMD 7970/GTX 960 to start chasing 120 FPS, if only to drive high refresh rate panels. We are likely being held back by CPU performance here, as illustrated by the GTX 970 and GTX 980 Ti being practically tied and the R9 290X stepping ahead of the pack. This makes integrated graphics an interesting prospect, which we might test in a later article. It is worth noting that at this low resolution the R9 290X and Fury X pull out a minor lead over the NVIDIA cards, and the Fury X extends that lead with the i5 and i3 configurations, just tipping over into double-digit percentage gains.
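
Two quick sanity checks fall out of this reasoning, sketched below with illustrative numbers rather than our measured data: the frame-time budget a refresh target implies, and the rule of thumb that when a much faster GPU barely outruns a slower one, the CPU is the likely bottleneck.

    def frame_budget_ms(target_fps: float) -> float:
        """Time each frame must complete in to hold a given frame rate."""
        return 1000.0 / target_fps

    def looks_cpu_bound(fps_fast_gpu: float, fps_slow_gpu: float,
                        tolerance_pct: float = 3.0) -> bool:
        """If the faster card's lead is within tolerance, suspect a CPU limit."""
        return (fps_fast_gpu - fps_slow_gpu) / fps_slow_gpu * 100.0 < tolerance_pct

    print(frame_budget_ms(120.0))         # ~8.33 ms, half the ~16.7 ms budget at 60 FPS
    print(looks_cpu_bound(142.0, 140.5))  # True: a 980 Ti-class card tied with a 970-class card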

Comments

  • anubis44 - Friday, October 30, 2015 - link

    The point is not whether you use DP, the point is that the circuitry is now missing, and that's why Maxwell uses less power. If I leave stuff out of a car, it'll be lighter, too. Hey look! No back seats anymore, and now it's LIGHTER! I'm a genius. It's not because nVidia whipped up a can of whoop-ass, or because they have magic powers, it's because they threw everything out of the airplane to make it lighter.
  • anubis44 - Friday, October 30, 2015 - link

    And left out the hardware based scheduler, which will bite them in the ass for a lot of DX12 games that will need this. No WAIT! nVidia isn't screwed! They'll just sell ANOTHER card to the nVidiots who JUST bought one that was obsolete, 'cause nVidia is ALWAYS better!
  • Alexvrb - Thursday, September 24, 2015 - link

    Not every game uses every DX12 feature, and knowing that their game is going to run on a lot of Nvidia hardware makes developers conservative in their use of new features that hurt performance on Nvidia cards. For example, as long as developers are careful with async compute and you've got plenty of CPU cycles, I think everything will be fine.

    Now, look at the 720p results. Why the change in the pecking order? Why do AMD cards increase their lead as CPU power falls? Is it a driver overhead issue, possibly related to async shader concerns? We don't know. Either way it might not matter; an early benchmark isn't necessarily representative of the final product, let alone a real-world experience.

    In the end it will depend on the individual game. I don't think most developers are going to push features really hard that kill performance on a large portion of cards... well not unless they get free middleware tools and marketing cash or something. ;)
  • cityuser - Sunday, September 27, 2015 - link

    Quite sure it's nvidia again doing some nasty work with the game company to drag down the performance of AMD cards!!
    Look at where nvidia cannot corrupt: Futuremark's benchmark tells another story!!
  • Drumsticks - Thursday, September 24, 2015 - link

    As always, it's only one data point. It was too early to declare AMD a winner then, but it's still too early to say they aren't actually going to benefit more from DX12 than Nvidia. We need more data to say for sure either way.
  • geniekid - Thursday, September 24, 2015 - link

    That's crazy talk.
  • Beararam - Thursday, September 24, 2015 - link

    Maybe not "vastly superior", but the gains on the 390X seem to be greater than those realized on the 980. Time will tell.

    https://youtu.be/_AH6pU36RUg?t=6m29s
  • justniz - Thursday, September 24, 2015 - link

    Such a large gain only on AMD just from DX12 (i.e. accessing the GPU at a lower level and bypassing the driver's DX11 implementation) is yet more evidence that AMD's DX11 drivers are much more of a bottleneck than nVidia's.
  • Gigaplex - Thursday, September 24, 2015 - link

    That part was pretty obvious. The current question is, how much of a bottleneck. Will DX12 be enough to put AMD in the lead (once final code starts shipping), or just catch up?
  • lefty2 - Thursday, September 24, 2015 - link

    I wonder if they were pressured not to release any benchmark that would make Nvidia look bad, similar to the way they did with Ashes of the Singularity.
