Feature: The PC graphics industry is one of the fastest-moving in all of tech. We take a look back at the progress made in 2005 and look ahead to what we can expect in 2006.
Even though core PC components like the CPU, RAM, and hard drive are in fact more essential to your rig's basic operation, it's the graphics card that seems to get all the attention from hardcore PC enthusiasts. There are good reasons for this: CPUs typically advance at the rate of Moore's Law, while graphics chips tend to exceed that rate of computational growth. What's more, the GPU is responsible for the most visible part of your PC: the very stuff shining out at you from your monitor.
In other words, no single component can have a more noticeable impact on the performance or quality of your PC—when you're playing 3D games or working with interactive 3D content creation—than the graphics card. The difference between a $150 and $400 CPU can't be seen or felt to the same degree as the same price disparity in your graphics card.
Because the PC graphics industry moves so fast, every year is an exciting one. Join us as we review the progress made in PC graphics in 2005, and look ahead to what we can expect in 2006.
2004 was a pretty big year for graphics, with the introduction of Nvidia's new graphics architecture (NV40), which pulled the company out of the GeForce FX doldrums, did away with those pesky application-specific driver optimizations that bordered on "cheating," and brought us Shader Model 3.0. Also in 2004, PCI Express started to take over the market. PCIe spread so quickly that ATI's rapid adoption and the early availability of its PCIe graphics cards helped the company sign up OEM customers and gain market share rapidly.
2005, by comparison, was kind of slow. It's Microsoft's fault, really. Microsoft did such a good job of making DirectX 9 forward-looking that we haven't had really major new graphics card features for a couple of GPU generations. The move to DirectX 9 with the Radeon 9700 (in mid-2002) was a giant leap from DirectX 8. From then until now, graphics manufacturers have simply been making ever-faster cards that adhere to the spec, implementing more of it over time: from Shader Model 2.0 to 2.0b to 3.0. Today, three and a half years later, GPU makers are still cranking out DirectX 9 cards, and this trend won't let up until at least late summer 2006. At that point, GPU makers will finally move to DX10 in anticipation of Windows Vista.
Nvidia has delivered genuine innovations, such as SLI, but those didn't debut in 2005. For Nvidia, 2005 was kind of a "more of the same" year. The company started off by moving its GeForce 6 line down to very cheap budget price levels.
Nvidia hasn't stopped making GeForce 6 series cards, though, even with the GeForce 7 series now positioned at the high end.
Though ATI was first out of the gate with a DirectX 9 card, and the Radeon 9700's R300 architecture dominated for quite some time, the company really took its sweet time incorporating more of the DX9 spec. This year was pretty rough for ATI. The R520 architecture, which finally brings Shader Model 3.0 to ATI cards, was set to be released in the second quarter of the year. Unfortunately, R520 fell victim to a small logic bug that was replicated throughout the chip, and it took ATI several revisions to finally find and fix it. This pushed R520 back to the fall, so instead of beating Nvidia's GeForce 7 cards to market by a month or two, it trailed them by three or four months.
Now that the entire X1000 line is on the market, we've already seen some pretty drastic discounting of X1600 and X1300 cards. At launch, they simply didn't compete well with similarly priced offerings from Nvidia. Now that they're a lot cheaper, they're a bit more attractive.
What happens in 2006 should be far more interesting than what happened in 2005. We expect the year to start off with new high-end graphics chips from both Nvidia and ATI (though not necessarily in that order). The logic bug that held back R520 never existed in R580, its high-end follow-up, so that product is right on time and will probably follow a lot closer on R520's heels than was originally planned. We don't have firm details on what R580 will be, but we expect it to be quite a bit faster than a Radeon X1800 XT.
Nvidia has been stuck on 110nm manufacturing for its performance graphics cards but has already begun to transition to 90nm with chipset parts. The company's first major product of 2006 will likely be a new high-end update to the GeForce 7800. By moving from 110nm to 90nm, it can add even more pipelines and maybe crank up the clock rate. If we had to guess, we would say that the GeForce 7800 GTX 512 card will be very short-lived and will all but disappear as soon as this new card is released.
The real excitement will come in the latter half of the year, though. Nvidia is hard at work on a true next-gen architecture (probably code-named G80), and ATI is cranking away at the design for the R600 series of chips. These designs are sure to incorporate the new DirectX 10 standard, or at the very least some of its features. With DX10 arriving alongside Windows Vista in the last quarter of the year, both ATI and Nvidia want to be right there with a full line of DX10 cards. We might even see a release as early as the summer, with cards promising great DX9 performance and future-proof DX10 support when Vista finally arrives. Of course, Vista will use DirectX 10 to draw the desktop itself, so 3D graphics performance is going to be critical for a lot more than just games.
It's our understanding that DirectX 10 is going to have no capability bits (bits defined in the driver that tell the system which DirectX features your card does and does not support). In theory, it's going to be an all-or-nothing situation: either your card fully supports the DX10 spec or it doesn't. No more of this "Shader Model 2.0, 2.0b, 3.0" business, with still other cap bits for things like geometry instancing. This is a big deal for developers, because it means that if they detect a "DX10" card, they can count on a complete, known feature set. But it might make for some really dubious marketing by the graphics companies. If a card comes out with "some" DX10 features, or "DX10-like" features, but isn't truly DX10 compliant, we can expect its maker to come up with all sorts of ways to market the card so that it looks like a DX10 card when it isn't.
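To make the contrast concrete, here is a minimal sketch, in C++ against the shipping Direct3D 9 API, of the kind of per-feature caps probing that DX10 is expected to eliminate. The function name and fallback logic are our own illustration, not taken from any real engine:

```cpp
// A minimal sketch of DirectX 9 caps-bit probing. Under DX9, every feature
// must be checked individually, because a card might expose SM 2.0, 2.0b,
// or 3.0. The function name and logic here are illustrative only.
#include <d3d9.h>

bool SupportsShaderModel3(IDirect3DDevice9* device)
{
    D3DCAPS9 caps = {};
    if (FAILED(device->GetDeviceCaps(&caps)))
        return false;  // treat a failed query as "not supported"

    // Shader versions are packed DWORDs; the SDK macros build comparable values.
    return caps.PixelShaderVersion  >= D3DPS_VERSION(3, 0) &&
           caps.VertexShaderVersion >= D3DVS_VERSION(3, 0);
}
```

If DX10 works as described, this per-feature branching simply goes away: creating a DX10 device either succeeds against the whole spec or fails outright.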
Of course, DX10 brings a whole lot more to the table, including a unified pixel and vertex shader interface, geometry shaders, much better performance for small batches of geometry or textures, much less severe penalties for state changes, and an entirely new DLL structure that should reduce driver overhead on the CPU. If that all sounds like a bunch of gobbledygook to you, let's put it this way: DX10, running on DX10-supporting hardware, will be capable of the kind of stuff that will put the Xbox 360 and PlayStation 3 to shame.
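For a sense of what the small-batch problem looks like today, consider the SM 3.0-era workaround: hardware instancing, which draws many copies of a mesh in a single call rather than paying per-draw overhead once per object. The sketch below uses the real D3D9 instancing calls, but the buffers, strides, and counts are placeholders of our own:

```cpp
// Sketch: D3D9 hardware instancing, the current workaround for small-batch
// overhead. Stream 0 carries the shared mesh; stream 1 carries per-instance
// data such as a transform. Strides here are placeholder values.
#include <d3d9.h>

void DrawInstanced(IDirect3DDevice9* dev,
                   IDirect3DVertexBuffer9* meshVB,
                   IDirect3DVertexBuffer9* instanceVB,
                   IDirect3DIndexBuffer9* ib,
                   UINT vertexCount, UINT triCount, UINT instances)
{
    // Repeat the mesh geometry once per instance.
    dev->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA | instances);
    dev->SetStreamSource(0, meshVB, 0, 32 /* bytes per vertex */);

    // Advance the per-instance stream once for each drawn copy.
    dev->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1u);
    dev->SetStreamSource(1, instanceVB, 0, 64 /* bytes per instance */);

    dev->SetIndices(ib);
    dev->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, vertexCount, 0, triCount);

    // Restore normal, non-instanced rendering for subsequent draws.
    dev->SetStreamSourceFreq(0, 1);
    dev->SetStreamSourceFreq(1, 1);
}
```

If DX10 delivers cheaper draw calls and state changes as promised, contortions like this should become far less necessary.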
Beyond 3D Rendering
Though Windows Vista and DX10 are the most exciting major shifts in graphics in years, there will be some other very slick stuff in 2006. ATI has already demonstrated the power of the GPU for general-purpose computing tasks (commonly called GP-GPU), and Havok has announced its intention to begin using SM 3.0-capable graphics cards for physics acceleration, for instance. Is physics benchmarking going to become part of our standard graphics card reviews? You never know!
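To give a flavor of how that works: in the GP-GPU approach, per-particle state lives in floating-point textures, and a pixel shader acts as a parallel kernel run over every texel. The C++ sketch below shows the logic such a kernel would implement; the data layout and names are our own illustration, not Havok's actual code:

```cpp
// Conceptual sketch of a physics kernel of the kind GP-GPU work maps onto
// graphics hardware. On SM 3.0 cards, this loop body would run as a pixel
// shader over a float texture of particles, writing results to a second
// texture ("ping-pong" rendering), since a shader can't read and write the
// same surface in one pass. Layout and names are illustrative, not Havok's.
#include <vector>

struct Particle {
    float px, py, pz;  // position
    float vx, vy, vz;  // velocity
};

void IntegrateEuler(std::vector<Particle>& particles, float dt)
{
    const float gravity = -9.81f;  // acceleration on the y axis
    for (Particle& p : particles) {
        p.vy += gravity * dt;  // apply gravity to velocity...
        p.px += p.vx * dt;     // ...then advance position by velocity
        p.py += p.vy * dt;
        p.pz += p.vz * dt;
    }
}
```

The appeal is sheer parallelism: a GPU can run that kernel across thousands of particles at once, while a CPU walks through them a few at a time.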
With the Xbox 360 just released and the PlayStation 3 and Nintendo Revolution coming up in 2006, it's easy to think that all the focus will be on gaming consoles. Perhaps not. Microsoft makes insane money from Windows, and Job One in Redmond is to make Vista a massive success. Part of doing that, a big part, is going to be convincing gamers that a PC with Windows Vista and a DX10 graphics card is the thing to have. It's going to be an exciting and competitive year in PC graphics, with a race to incorporate and improve GP-GPU capabilities alongside a rush to push out the best-performing, most complete DirectX 10 graphics cards.