AMD's Radeon HD 5870: Bringing About the Next Generation Of GPUs
by Ryan Smith on September 23, 2009 9:00 AM EST - Posted in GPUs
DirectX 11 Redux
With the launch of the 5800 series, AMD is quite proud of the position they’re in. They have a DX11 card launching a month before DX11 is delivered to consumers in the form of Win7, and NVIDIA’s slower timetable means that AMD has had silicon ready far sooner. This puts AMD in the position of Cypress being the de facto hardware implementation of DX11: game development will have to begin solely on their hardware (and be programmed against AMD’s advantages and quirks) until NVIDIA’s hardware is ready, a situation that is helpful for the company in the long term. It’s a position AMD has not enjoyed since 2002, when the Radeon 9700 anchored DirectX 9.0; DirectX 10, by contrast, was anchored by NVIDIA due in large part to AMD’s late hardware.
As we have already covered DirectX 11 in-depth with our first look at the standard nearly a year ago, this is going to be a recap of what DX11 is bringing to the table. If you’d like to get the entire inside story, please see our in-depth DirectX 11 article.
DirectX 11, as we have previously mentioned, is a pure superset of DirectX 10. Rather than being the kind of massive overhaul that DX10 was compared to DX9, DX11 builds on DX10 without throwing away the old ways. The result is easy to see in the hardware of the 5870: as features were added to the Direct3D pipeline, they were added to the RV770 pipeline in its transformation into Cypress.
New to the Direct3D pipeline for DirectX 11 are the tessellation system, which is divided into 3 parts, and the Compute Shader. Starting at the very top of the tessellation stack, we have the Hull Shader. The Hull Shader is responsible for taking in patches and control points (tessellation directions) and preparing a piece of geometry to be tessellated.
Next up is the tessellator proper, which is a rather significant piece of fixed-function hardware. The tessellator’s sole job is to take geometry and break it up into finer pieces, in effect creating additional geometric detail where there was none. As setting up geometry at the start of the graphics pipeline is comparatively expensive, this is a very cool hack to get more geometric detail out of an object without the need to fully process what amounts to “eye candy” polygons.
As the tessellator is not programmable, it simply tessellates whatever it is fed. This is what makes the Hull Shader so important, as it serves as the programmable input side of the tessellator.
Once the tessellator is done, it hands its work off to the Domain Shader, which also receives the Hull Shader’s original inputs. The Domain Shader is responsible for any further manipulation of the tessellated data, such as applying displacement maps, before passing it along to other parts of the GPU.
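For developers, wiring these stages up through the Direct3D 11 API looks roughly like the following. This is a minimal sketch assuming precompiled HLSL 5.0 shader blobs; the helper name and parameters are our own, not anything AMD or Microsoft prescribes:

```cpp
#include <d3d11.h>

// Hypothetical helper: binds a precompiled hull/domain shader pair so that
// patch control points flow through the fixed-function tessellator.
void BindTessellationStages(ID3D11Device* device, ID3D11DeviceContext* context,
                            const void* hsBytecode, SIZE_T hsSize,
                            const void* dsBytecode, SIZE_T dsSize)
{
    ID3D11HullShader*   hullShader   = nullptr;
    ID3D11DomainShader* domainShader = nullptr;

    // The Hull Shader programs the tessellator's input: it emits per-patch
    // tessellation factors and (optionally) transformed control points.
    device->CreateHullShader(hsBytecode, hsSize, nullptr, &hullShader);

    // The Domain Shader consumes the tessellator's output plus the Hull
    // Shader's original inputs, e.g. to apply a displacement map.
    device->CreateDomainShader(dsBytecode, dsSize, nullptr, &domainShader);

    // Geometry is submitted as control-point patches rather than triangles.
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST);
    context->HSSetShader(hullShader, nullptr, 0);
    context->DSSetShader(domainShader, nullptr, 0);
}
```

The fixed-function tessellator itself sits invisibly between the two calls; there is no API object for it, only the factors the Hull Shader feeds it.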
The tessellator is very much AMD’s baby in DX11. They’ve been playing with tessellators since as early as 2001, only for them to never gain traction on the PC. The tessellator has seen use in the Xbox 360, where the AMD-designed Xenos GPU has one (albeit much simpler than DX11’s), but when that same tessellator was brought over and put in the R600 and successive hardware, it went unused since it was not part of the DirectX standard. Now that tessellation is finally part of that standard, we should expect to see it picked up and used by a large number of developers. For AMD, it’s vindication for all the work they’ve put into tessellation over the years.
The other big addition to the Direct3D pipeline is the Compute Shader, which allows programs to access the hardware of a GPU and treat it like a general-purpose data processor rather than a graphical rendering processor. The Compute Shader is open for use by games and non-games alike, although when it’s used outside of the Direct3D pipeline it’s usually referred to as DirectCompute rather than the Compute Shader.
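As a rough illustration of how little ceremony is involved, here is a minimal sketch of launching work through the Compute Shader via the Direct3D 11 API; the helper name, the compiled cs_5_0 blob, and the output view are our own assumptions:

```cpp
#include <d3d11.h>

// Hypothetical helper: runs a compute shader outside the rendering pipeline
// (i.e. DirectCompute). 'outputUAV' is an unordered access view over the
// buffer the shader writes its results to.
void DispatchCompute(ID3D11Device* device, ID3D11DeviceContext* context,
                     const void* csBytecode, SIZE_T csSize,
                     ID3D11UnorderedAccessView* outputUAV, UINT groupsX)
{
    ID3D11ComputeShader* computeShader = nullptr;
    device->CreateComputeShader(csBytecode, csSize, nullptr, &computeShader);

    // Bind the shader and its output, then launch a 1D grid of thread groups.
    // No vertices, rasterizer, or render targets are involved.
    context->CSSetShader(computeShader, nullptr, 0);
    context->CSSetUnorderedAccessViews(0, 1, &outputUAV, nullptr);
    context->Dispatch(groupsX, 1, 1);
}
```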
For its use in games, the big thing AMD is pushing right now is Order Independent Transparency, which uses the Compute Shader to sort transparent fragments in a single pass so that they are rendered in the correct order. This was possible before using other methods (e.g. pixel shaders), but using the Compute Shader is much faster.
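To illustrate what that sorting step buys you, here is a CPU-side analog of the per-pixel work. This is purely our sketch: the real technique runs on the GPU via the Compute Shader, and the struct and function names here are hypothetical:

```cpp
#include <algorithm>
#include <vector>

// A transparent surface covering the pixel, as the GPU would capture it.
struct Fragment {
    float depth;   // distance from the camera
    float rgba[4]; // premultiplied-alpha color
};

// Sort back-to-front, then blend in order; the order the transparent
// triangles were originally drawn in no longer matters.
void ResolvePixel(std::vector<Fragment>& frags, float out[4])
{
    std::sort(frags.begin(), frags.end(),
              [](const Fragment& a, const Fragment& b) { return a.depth > b.depth; });

    for (const Fragment& f : frags) {
        const float a = f.rgba[3];
        for (int c = 0; c < 4; ++c)
            out[c] = f.rgba[c] + out[c] * (1.0f - a); // "over" blend
    }
}
```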
Other features finding their way into Direct3D include some significant changes for textures, in the name of improving image quality. Maximum texture sizes are being bumped up to 16K x 16K (that’s a 256MP texture), which for all practical purposes means textures can be of unlimited size: an uncompressed 32-bit 16K x 16K texture would occupy a full 1GB, so you’ll run out of video memory before being able to utilize such a large texture.
The other change to textures is the addition of two new texture compression schemes, BC6H and BC7. These new schemes are another one of AMD’s pet projects, as AMD developed them and pushed for their inclusion in DX11. BC6H is the first texture compression method dedicated to HDR textures, which previously compressed very poorly when shoehorned into LDR-oriented schemes like BC3/DXT5. It compresses HDR textures at a lossy 6:1 ratio. Meanwhile BC7 is for use with regular textures, and is billed as a replacement for BC3/DXT5. It has the same 3:1 compression ratio for RGB textures.
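For the curious, those ratios fall out of simple block arithmetic; both formats store each 4x4 block of texels in 128 bits. A quick sketch of the math:

```cpp
// Both BC6H and BC7 pack a 4x4 texel block into 128 bits: 8 bits per texel.
const double compressedBpp = 128.0 / (4 * 4);            // 8 bits/texel

const double bc6hSourceBpp = 3 * 16;                     // RGB at FP16: 48 bits/texel
const double bc7SourceBpp  = 3 * 8;                      // RGB at 8 bits: 24 bits/texel

const double bc6hRatio = bc6hSourceBpp / compressedBpp;  // 48 / 8 = 6:1
const double bc7Ratio  = bc7SourceBpp  / compressedBpp;  // 24 / 8 = 3:1
```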
We’re actually rather excited about these new texture compression schemes, as better ways to compress textures directly lead to better texture quality. Compressing HDR textures allows for larger/better textures thanks to the space saved, and using BC7 in place of BC3 is an outright quality improvement in the same amount of space, given an appropriate texture. Better compression and tessellation stand to be the biggest contributors to improving the base image quality of games, by leading to better textures and better geometry.
We had been hoping to supply some examples of these new texture compression methods in action with real textures, but we have not been able to secure the necessary samples in time. In the meantime we have Microsoft’s examples from GameFest 2008, which drive the point home well enough in spite of being synthetic.
Moving beyond the Direct3D pipeline, the next big feature coming in DirectX 11 is better support for multithreading. By allowing multiple threads to simultaneously create resources, manage states, and issue draw commands, it will no longer be necessary to have a single thread do all of this heavy lifting. As this is an optimization focused on better utilizing the CPU, graphics performance in GPU-limited situations stands to gain little; rather, this is going to help in CPU-limited situations by letting the CPU better utilize the graphics hardware. Technically this feature does not require DX11 hardware support (it’s a high-level construct available for use with DX10/10.1 cards too), but it’s still a significant technology being introduced with DX11.
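In API terms this takes the form of deferred contexts and command lists: worker threads record commands, and the main thread replays them. The sketch below shows the general shape of the model, with error handling omitted and function names of our own choosing:

```cpp
#include <d3d11.h>

// A worker thread records draw commands on a deferred context; nothing
// reaches the GPU until the command list is executed elsewhere.
void RecordOnWorkerThread(ID3D11Device* device, ID3D11CommandList** commandListOut)
{
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);

    // ... set state and issue draw calls on 'deferred' exactly as you would
    // on the immediate context ...

    deferred->FinishCommandList(FALSE, commandListOut);
    deferred->Release();
}

// The main thread replays the prerecorded commands on the immediate context.
void SubmitOnMainThread(ID3D11DeviceContext* immediate, ID3D11CommandList* commandList)
{
    immediate->ExecuteCommandList(commandList, FALSE);
    commandList->Release();
}
```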
Last but not least, DX11 brings with it High Level Shader Language 5.0, which in turn brings several new instructions primarily focused on speeding up common tasks, along with some new features that make it more C++-like. Classes and interfaces make an appearance here, easing shader development by allowing for better segmentation of code. This goes hand-in-hand with dynamic shader linkage, which helps to clean up code by only linking in shader code suitable for the target device, taking the management of that task out of the hands of the coder.
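On the application side, dynamic shader linkage looks roughly like the sketch below: a pixel shader written against an HLSL interface is bound together with one concrete class implementation at bind time. The shader blob and the instance name "gPointLight" are hypothetical stand-ins:

```cpp
#include <d3d11.h>

// Hypothetical helper: binds a pixel shader compiled with class linkage,
// selecting which implementation of its HLSL interface gets linked in.
void BindShaderWithLinkage(ID3D11Device* device, ID3D11DeviceContext* context,
                           const void* psBytecode, SIZE_T psSize)
{
    ID3D11ClassLinkage* linkage = nullptr;
    device->CreateClassLinkage(&linkage);

    ID3D11PixelShader* pixelShader = nullptr;
    device->CreatePixelShader(psBytecode, psSize, linkage, &pixelShader);

    // Pick the concrete implementation of the shader's interface; only this
    // code path is linked in, rather than every variant the coder wrote.
    ID3D11ClassInstance* instance = nullptr;
    linkage->GetClassInstance("gPointLight", 0, &instance);

    context->PSSetShader(pixelShader, &instance, 1);
}
```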
327 Comments
Zool - Sunday, September 27, 2009 - link
The speed of the on-chip cache just shows that the external memory bandwidth in current GPUs is only there to get data to the GPU or receive the final data from the GPU. The raw processing happens on-chip with that 10-times-faster SRAM cache, or else the raw teraflops would vanish.
JarredWalton - Sunday, September 27, 2009 - link
If SD had any reading comprehension or understanding of tech, he would realize that what I am saying is:
1) Memory bandwidth didn't double - it went up by just 23%
2) Look at the results: performance increased by far more than 23%
3) Ergo, the 4890 is not bandwidth limited in most cases, and there was no need to double the bandwidth.
Would more bandwidth help performance? Almost certainly, as the 5870 is such a high performance part that unlike the 4890 it could use more. Similarly, the 4870X2 has 50% more bandwidth than the 5870, but it's never 50% faster in our tests, so again it's obviously not bandwidth limited.
Was it that hard to understand? Nope, unless you are trying to pretend I put an ATI bias on everything I say. You're trying to start an argument again where there was none.
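For reference, a quick sketch of the arithmetic behind that 23% figure, using the effective memory clocks quoted in this thread (both cards on a 256-bit bus):

```cpp
// HD 4890: 3900MHz effective GDDR5; HD 5870: 4800MHz effective GDDR5.
const double busBytesPerTransfer = 256 / 8.0;                  // 32 bytes
const double hd4890GBps = 3900e6 * busBytesPerTransfer / 1e9;  // ~124.8 GB/s
const double hd5870GBps = 4800e6 * busBytesPerTransfer / 1e9;  // ~153.6 GB/s
const double gain = hd5870GBps / hd4890GBps - 1.0;             // ~0.23, i.e. 23%
```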
SiliconDoc - Sunday, September 27, 2009 - link
The 4800 effective data rate RAM is faster vs. the former 3600 - hence the bus is running FASTER - so your simple conclusions are wrong.
When we overclock the 5870's RAM, we get a framerate increase - it increases the bandwidth, and up go the numbers.
---
Not like there isn't an argument, because you don't understand tech.
JarredWalton - Sunday, September 27, 2009 - link
The bus is indeed faster -- 4800 effective vs. 3900 on the 4890 or 3600 on the 4870. What's "wrong about my simple conclusions"? You're not wrong, but you're not 100% right if you suggest bandwidth is the only bottleneck.
Naturally, as most games are at least partially bandwidth limited, if you overclock 10% you increase performance. The question is, does it increase linearly by 10%? Rarely, just as if you overclock the core 10% you usually don't get a 10% boost. If you do get a 1-for-1 increase with overclocking, it indicates you are solely bottlenecked by that aspect of performance.
So my conclusions still stand: the 5870 is more bandwidth limited than 4890, but it is not completely bandwidth limited. Improving the caches will also help the GPU deal with less bandwidth, just as it does on CPUs. As fast as Bloomfield may be with triple-channel DDR3-1066 (25.6GB/s), the CPU can process far more data than RAM could hope to provide. Would a wider/faster bus help the 5870? Yup. Would it be a win-win scenario in terms of cost vs. performance? Apparently ATI didn't think so, and given how quickly sales numbers taper off above $300 for GPUs, I'm inclined to agree.
I'd also wager we're a lot more CPU limited on 5870 than many other GPUs, particularly with CrossFire setups. I wouldn't even look at 5870 CrossFire unless you're running a high-end/overclocked Core i7 or Phenom II (i.e. over ~3.4GHz).
And FWIW: Does any of this mean NVIDIA can't go a different route? Nope. GT300 can use 512-bit interfaces with GDDR5, and they can be faster than 5870. They'll probably cost more if that's the case, but then it's still up to the consumers to decide how much they're willing to spend.
silverblue - Saturday, September 26, 2009 - link
I suppose if we end up seeing a 512-bit card then it'll make for a very interesting comparison with the 5870. With equal clocks during testing, we'd have a far better idea, though I'd expect to see far more RAM on a 512-bit card, which may serve to skew the figures and muddy the waters, so to speak.
Voo - Friday, September 25, 2009 - link
Hey Jarred, I know that's neither the right place nor the right person to ask, but do we get some kind of "Ignore this person" button with the site revamp Anand talked about some months ago?
I think I'd prefer this feature above almost everything - even an edit button ;)
JarredWalton - Friday, September 25, 2009 - link
I'll ask and find out. I know that the comments are supposed to receive a nice overhaul, but more than that...? Of course, if you ignore his posts on this (and the responses), you'd only have about five comments! ;-)
Voo - Saturday, September 26, 2009 - link
Great! Yep, it'd be rather short, but I'd rather have 10 interesting comments than 1000 COMMENTS WRITTEN IN CAPS!!11 with dubious content ;)
SiliconDoc - Wednesday, September 30, 2009 - link
I put it in caps so you could easily avoid them; I was thinking of you and your "problems".
I guess since you "knew this wasn't the right time or place" but went ahead anyway, you've got "lots of problems".
Let me know when you have posted an "interesting comment" with no "dubious nature" to it.
I suspect I'll be waiting years.
MODEL3 - Friday, September 25, 2009 - link
Hi Ryan,
Nice new info in your review.
The day you posted your review, I wrote in the forums that, according to my perception, there are reasons other than bandwidth limitations and driver maturity that the 850MHz 5870 hasn't doubled its performance in relation to an 850MHz 4890.
Usually when a GPU has 2X the specs of another GPU, the performance gain is 2X (of course I am not talking about games with engines that are CPU limited, or engines that seem to scale badly or are poorly coded, for example).
There are many examples in the past where we had a 2X performance gain with 2X the specs (not in all games, but in many games).
From the tests that I saw in your review and from my understanding of the AMD slides, I think there are 2 more reasons that the 5870 performs like that.
The day of your review I wrote to the forums the additional reasons that I think the 5870 performs like that, but nobody replied to me.
I wrote that probably the 5870 has:
1. Geometry/vertex performance issues (in the sense that it cannot generate 2X the geometry in relation to the 4890) (my main assumption)
and/or
2. Geometry/vertex shading performance issues (in the sense that the geometry shader [GS] cannot shade vertices at 2X the speed in relation to the 4890) (another possible assumption)
I guess there are synthetic benchmarks that have tests like that (pure geometry speed and pure geometry/vertex shader speed, in addition to the classic pixel shader speed tests), so someone can see if my assumption is true.
If you have the time, and you think this is possible and worth your while, can you check my hypothesis please?
Thanks very much,
MODel3