30 Comments

  • Arbie - Monday, March 23, 2020 - link

    Is this for Turing only? Or Turing and Pascal?
  • edzieba - Monday, March 23, 2020 - link

    Pascal has no Tensor cores, so Turing only. Will probably not be backported for the handful of nutters gaming on GV100 either.
  • Dragonstongue - Monday, March 23, 2020 - link

    "technically" does not require Tensor cores as AMD is "able" to do raytracing as well, obviously they not have Tensor at all, unless they got the "go ahead" from Ngreedia to use when it comes time for PS5 Xbox S Next etc.. which I highly doubt.

    That being said, NV has never been in the habit of "back"-supporting old products with the newest features; it would cut into sales of the more $$$$ products lining their pockets (they are a business, I get it, but still). Pascal was overall good-to-great performance given its power use, while Turing for the most part took quite the hit power-wise (though it is a good chunk faster) from these specialized Tensor cores.

    Odd they could not figure out how to make the "shaders" able to do this stuff without requiring specialized "units" for it; that might have kept power use more manageable... something along those lines anyway??
  • Unashamed_unoriginal_username_x86 - Monday, March 23, 2020 - link

    This hurts to read. Regardless, RT cores are distinct from Tensor cores. More hierarchy stuff, or something.
  • yeeeeman - Monday, March 23, 2020 - link

    DLSS is implemented using Tensor Cores not ray tracing cores. So no, AMD can't do shit about it.
  • olde94 - Monday, March 23, 2020 - link

    I mean, it's not like AMD can't do FP16 or FP8 operations on the GPU, just not as efficiently.
  • Santoval - Monday, March 23, 2020 - link

    There is no such thing as "FP8" and I don't think there can be, due to the way floating point arithmetic is structured (significands, exponents, signs, etc.). For 8 bits and 4 bits there are INT8 and INT4 respectively.
  • londedoganet - Monday, March 23, 2020 - link

    8-bit floating point numbers are absolutely a thing. And they have already been trialed in deep learning: https://papers.nips.cc/paper/7994-training-deep-ne...
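    For what it's worth, here is a minimal sketch of how an 8-bit float could be laid out, assuming a 1-4-3 sign/exponent/mantissa split; the layout and bias are illustrative assumptions, and the formats trialed in deep-learning research differ in their details:

    ```python
    # Minimal sketch of decoding an 8-bit float with a 1-4-3 layout
    # (1 sign bit, 4 exponent bits, 3 mantissa bits). Layout and bias are
    # assumptions for illustration; real FP8 proposals differ in details.
    def decode_fp8(byte, exp_bits=4, man_bits=3):
        bias = (1 << (exp_bits - 1)) - 1               # 7 for a 4-bit exponent
        sign = -1.0 if (byte >> (exp_bits + man_bits)) & 1 else 1.0
        exp = (byte >> man_bits) & ((1 << exp_bits) - 1)
        man = byte & ((1 << man_bits) - 1)
        if exp == 0:                                   # subnormal numbers
            return sign * (man / (1 << man_bits)) * 2.0 ** (1 - bias)
        return sign * (1 + man / (1 << man_bits)) * 2.0 ** (exp - bias)

    # 0b0_0111_100: sign 0, exponent == bias, mantissa 4/8 -> 1.5
    print(decode_fp8(0b00111100))
    ```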
  • olde94 - Monday, March 23, 2020 - link

    Hey! You know what he says: "The more you buy... the more you save!" Exactly!
  • CiccioB - Monday, March 23, 2020 - link

    <quote>"technically" does not require Tensor cores</quote>
    <quote>as AMD is "able" to do raytracing as well</quote>
    <quote>unless they got the "go ahead" from Ngreedia</quote>

    Tensor cores confused with RT cores, talk of raytracing when nothing in the article has anything to do with it, calling Nvidia "Ngreedia" just to add a bit more negative salt to a senseless rant...

    Oh, AMD fanboys, how ignorant they can be and how many stupid things they can pull out of their *ss just to throw sh!t at the competition, which is more than 2 years ahead!
  • vader3d - Monday, March 23, 2020 - link

    Oh, Nvidia fanboys, how ignorant they can be and how many stupid things they can pull out of their *ss just to throw sh!t at a competitor which, it's true, does have a head start in GPU tech. Let me tell you a secret: Intel, meet Ryzen.

    So while Intel had more than a 10-year lead with the Core series, all it takes is some research and development and the tables can turn. Let this GPU war play out this October, when rumor has it AMD will show Big Navi. We have some inkling of what it may look like in the Xbox Series X.
  • Qasar - Tuesday, March 24, 2020 - link

    Don't waste your time with CiccioB, vader3d; he hates AMD with a passion. Look at most of his other posts: he will just insult you and resort to name calling.
  • CiccioB - Tuesday, March 24, 2020 - link

    It's not a question of hating, it's a question of numbers and facts.
    You are AMD lovers who don't consider those because they hurt your feelings.
    But reality is just different from what you think it is.

    Ah, I'm an old AMD shareholder. Yes, a real hater. Poor AMD fanboys.
  • CiccioB - Tuesday, March 24, 2020 - link

    Interesting point of view, despite the fact that Nvidia, unlike Intel, has no problem accessing advanced process nodes, and that there is not a single technological advantage AMD is going to use that Nvidia cannot.
    When Big Navi is released this October, Nvidia will be there with Ampere, an architecture made on 7nm (or even better) that builds on the experience gained with Turing, starting from a very solid architecture that is very power efficient.

    You AMD fanboys are famous for constantly hoping that AMD is going to release a killer GPU.
    The first version of GCN, then its revisions Fiji, Polaris, Vega, and now RDNA were all announced as killer GPUs in their time (oh, the #poorvolta joke), and you are still here saying that in 6 months AMD will release something that will disrupt the competition.

    You have not understood that AMD's strength is not in its transistors (where it is 3 years behind) but in the console game support tailored to its own HW specifications, which gives it a big advantage.
    Without that, AMD would not even compete with Intel once it enters the market; and seeing that Intel will serve the biggest part of the GPU market and has a lot of experience in providing support tools, it will not need the best tech to gather support.
    This puts Nvidia in a difficult position, and that is why they lead the market with top HW.
    Accept that and keep hoping for the future killer GPU from AMD... maybe when they'll be on 3nm and Nvidia stays at 7nm to spare some bucks on R&D.
  • Qasar - Tuesday, March 24, 2020 - link

    Yeah, whatever, arrogant AMD-hater CiccioB. Get over yourself and get off your high horse.
  • Yojimbo - Monday, March 23, 2020 - link

    I assume that not all Turing cards can support this. If it really uses the Tensor cores it should be RTX-only, because GTX Turing (16-series) cards don't have Tensor core support.
  • CiccioB - Monday, March 23, 2020 - link

    Yes, as you can see, the slide shows performance down to the 2060 and nothing below it.
    For this to work, Tensor cores are needed.

    But I expect AMD to come up with a surrogate that does 1/10 of the quality at half the speed, claim it can do the same without those useless Tensor cores, and warm AMD fanboys' hearts for a couple of months.
  • Desierz - Monday, March 23, 2020 - link

    I can't say I see any difference between the on/off MechWarrior DLSS example. Must be my eyes...
  • qap - Monday, March 23, 2020 - link

    That's the point. DLSS renders at a lower resolution and upscales. If you can't see any difference, then they have done a great job (obviously it must also hold up in actual games), because then it is free performance.
    Btw, Hardware Unboxed already did "DLSS 2.0" testing roughly a month ago, and in Wolfenstein (the only title with true DLSS 2.0 at the time) it was impressive. Control is a really interesting title too, because it originally had something in between the original DLSS and the new one.
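    As a rough illustration of where the "free performance" comes from: the per-axis scale factors below are assumptions based on commonly reported DLSS 2.0 modes, not official figures, but they show how much shading work a lower internal resolution saves.

    ```python
    # Rough sketch: pixel-count savings from rendering internally at a lower
    # resolution and upscaling to the output target (DLSS-style).
    # The per-axis scale factors are assumptions, not official values.
    MODES = {"Quality": 0.67, "Balanced": 0.58, "Performance": 0.50}

    def internal_resolution(out_w, out_h, scale):
        return round(out_w * scale), round(out_h * scale)

    out_w, out_h = 3840, 2160  # 4K output target
    for mode, scale in MODES.items():
        w, h = internal_resolution(out_w, out_h, scale)
        saved = 1 - (w * h) / (out_w * out_h)
        print(f"{mode}: render {w}x{h}, ~{saved:.0%} fewer shaded pixels")
    ```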
  • schujj07 - Monday, March 23, 2020 - link

    I can say that I see a difference between the two images, and in the stills it is a bit apparent. The textures are more muted and the image isn't as sharp. During gameplay I doubt it will be as noticeable, though.
  • owan - Monday, March 23, 2020 - link

    The difference is that the left one is 71 fps and the right is 95 fps at equal quality, thanks to DLSS.
  • maroon1 - Monday, March 23, 2020 - link

    Wolfenstein: Youngblood uses DLSS 2.0, and it works very well according to Digital Foundry and Hardware Unboxed.
  • emilemil1 - Monday, March 23, 2020 - link

    This could be a killer addition to the dynamic resolution option that many games use to combat frame drops during demanding sections.
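    As a hypothetical sketch of the kind of feedback loop such a dynamic resolution system might use (the names, thresholds, and step sizes here are made up for illustration):

    ```python
    # Hypothetical sketch: nudge the internal render scale each frame based on
    # how far the last frame time is from the target, then let an upscaler
    # (DLSS-style or otherwise) reconstruct the output resolution.
    # Thresholds and step sizes are arbitrary illustration values.
    TARGET_MS = 16.7                 # ~60 fps target
    MIN_SCALE, MAX_SCALE = 0.5, 1.0  # clamp the per-axis render scale

    def update_render_scale(scale, last_frame_ms):
        if last_frame_ms > TARGET_MS * 1.05:    # running slow: drop resolution
            scale -= 0.05
        elif last_frame_ms < TARGET_MS * 0.90:  # headroom: raise resolution
            scale += 0.05
        return max(MIN_SCALE, min(MAX_SCALE, scale))

    # Example: a demanding scene pushes frame time to 20 ms
    print(update_render_scale(1.0, 20.0))  # -> 0.95
    ```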
  • name99 - Monday, March 23, 2020 - link

    So this is basically smart sharpening (or artificially creating 4K out of 1080p). Which is fine.
    Along the same lines one could imagine creating synthetic frames, and so artificially creating 120fps out of 60fps source. Again fine.

    My question, however, is: is this what gamers really want?
    I'm not a gamer, but I'd have to say, based on watching movie/TV type content my priorities for "improving" content (always assuming the improvement engine actually works!) would be
    (a) UHD
    (b) 120 fps
    (c) 4K upscaling.

    UHD seems to me the consistently highest-value item. Every time you see it, it pops, and it never gets old. 120fps (and removal of judder when panning) is the second highest value.
    4K upscaling, I gotta say, is a distant third. Maybe it's because my eyes are getting old, but it's something you only really notice when you freeze the image, or look closely at the pixels of the content (rather than the content itself).

    So (to me anyway) it looks like nV are attacking what may be the easiest problem, but not the highest-value problem. Would gamers agree, or do they agree with nV's priorities?
  • name99 - Monday, March 23, 2020 - link

    Sorry, HDR above, not UHD!
    I'm growing senile as I grow older!!!
  • Yojimbo - Monday, March 23, 2020 - link

    Gamers want the best experience they can get with the computation power they have. The efforts to push that forward will fan outward to more areas and increase in complexity until it gets harder and harder to make improvements. So HDR is one possible area of improvement, smart upscaling is another, variable rate shading is another, ray tracing is another, etc. Advancing and embracing one doesn't exclude the others. The thing with HDR, though, is that it is expensive, so the market for it will remain small for some time, limiting developer support. In fact, I believe the GPU makers already have support for it in place.
    I remember seeing PR announcements for it years ago. It seems like every year they are promoting gaming HDR monitors, but these things cost over a thousand dollars. So I don't think it's accurate to say NVIDIA is prioritizing one thing over the other. Each thing has its own costs and benefits and its own market, and NVIDIA is serving them both. It's just a matter of developers taking advantage of what's made available. Developers aren't likely to put much effort into HDR until there are a lot more HDR monitors out there. There are many more RTX cards out there, and there will assuredly be a rapidly increasing number of DLSS 2.0-capable cards in gamers' hands going forward.
  • brucethemoose - Monday, March 23, 2020 - link

    On 120 FPS content, look up DAIN. Apparently the depth buffer is useful for motion interpolation, and in games it theoretically wouldn't have to be generated, since one already exists.
  • whatthe123 - Monday, March 23, 2020 - link

    They don't have any control over whether or not you buy an HDR display, so I don't see how they could make HDR a priority.

    This "solution" targets both framerate and upscaling, so by your logic they're targeting everything they can actually target.

    I don't know how well this version works, but 1.0 didn't work very well in terms of image quality. It had the edge warping and destruction of patterns seen in other NN image upscalers.
  • zodiacfml - Tuesday, March 24, 2020 - link

    Looks compelling. Combined with dynamic resolution, there would be little need for VRR.
  • TheJian - Thursday, March 26, 2020 - link

    "The catch to DLSS 2.0, however, is that this still requires game developer integration, and in a much different fashion."
    "None the less, it means that DLSS 2.0 still needs to be integrated on a per-game basis, even if the per-game training is gone."

    https://wccftech.com/nvidia-dlss-2-0-revealed-2x-f...
    "Russ Bullock, President at Piranha Games:
    NVIDIA DLSS 2.0 basically gives our players a free performance boost, without sacrificing image quality. It was also super easy to implement with NVIDIA’s new SDK, so it was a no brainer for us to add it to MechWarrior 5."

    Anandtech left out how easy it is for devs to add it (it's made to sound far harder than it is). Always taking shots when they can here... LOL. The fact that it is EASY is a VERY important point. According to them, it's so easy it's a no-brainer to add it to other games too, right? Of course this sucks for an AMD portal site, right (and for the competition, I admit that, not my problem)? ;) Tough to figure out how to downplay Nvidia stuff these days, isn't it, Ryan? Wait, 1440p is the new enthusiast standard now, right? Ah, wait, you said that back at the 660 Ti, and it's still not right. Maybe one day... LOL.

    Great news for NV owners, of which I'll likely be one again by Xmas (a 1070 Ti was my last card). 25K+ on NV last week, so they get my next card by default at this point... It is FREE. Trade on corona, people... ROFL.
