PC Graphics: The Year Behind, The Year Ahead
December 23, 2008
By Jason Cross
We say this every year, and every year it's true: No home computer technology moves as fast as the graphics business. Standards update frequently, and computational power grows at a rate that outpaces Moore's Law. All this creates an exciting rush not only to be the fastest, but also to push feature sets and claim compatibility bragging rights in a way you just don't see in many other markets. It also means that "news" comes and goes faster than you could possibly cover it.
2008 is no exception, and we don't expect 2009 to be slow, either. In fact, it might even be a busier year for graphics than 2008 was, depending on how a few things go. Join us, won't you, as we struggle to put the PC graphics business of 2008 into perspective and gaze into our 2009 crystal ball.
In our year-end article last year, we claimed DirectX 10 was a disappointment. Games that supported DX10 showed very minor differences in visual quality, often at a huge cost in performance. Fortunately, 2008 saw that situation turn around a bit. Microsoft's original promise that DX10 uptake would be faster than DX9's, delivering vastly improved visual quality at the same frame rate or improved frame rates with similar quality, was way too eager.
In truth, it has taken some time for DX10 to become mainstream in games, but it has finally arrived—it's no longer news that a game has a DX10 mode. What's more, modern hardware combined with better drivers and a better understanding of DX10 by game developers means that you don't usually pay a hefty price for running a game in DX10 mode. In some games, like Far Cry 2, DX10 can actually outperform DX9 while offering some subtle visual quality improvements.
If 2007 was a tough year for ATI and a great year for Nvidia (and it was), then this year we saw the reverse. In 2007, Nvidia produced some killer products at prices everyone could afford, like the GeForce 8800 GT, while ATI floundered with Radeon HD 3000 series cards that couldn't quite keep up the pace.
This year, Nvidia launched its new generation GeForce GTX 260 and 280 cards, but they were huge, power-hungry, and expensive. In many ways, the GT 200 GPU that powers those cards is very much like a scaled-up G92, offering no major new features and achieving its impressive speed through an almost linear increase in transistor count.
ATI followed this up with the first card to take advantage of an engineering strategy put into place years ago: build a GPU for the large middle of the market, and rely on dual-GPU solutions for the smaller (but lucrative) high end. The Radeon HD 4850 and 4870 both proved to be excellent cards, and they have only gotten better as drivers have matured. The cards were so good in their price brackets, in fact, that Nvidia was forced to dramatically drop prices on the 9800 GTX and GeForce GTX 200 series cards to remain competitive.
The sudden and dramatic price drops forced Nvidia to issue far-lower-than-expected earnings forecasts to its investors. Around the same time, the company was forced to take a one-time charge of $200 million to deal with faulty laptop graphics chips that were causing notebooks from several vendors to fail. Lower earnings, thinner margins, and hundreds of millions spent making up for faulty chips were bad enough, but the PR black eye the company suffered was perhaps even worse.
For the first time in a long time, Nvidia's graphics cards weren't necessarily viewed as best-in-class for performance or reliability. The end result? Nvidia's stock dropped by a third in one day, and now sits at about $9–10 per share, down from over $30 a share a year ago.
Of course, ATI is now owned by AMD, a company seemingly designed to bleed money. AMD's stock is in the toilet as well, but it hasn't done well for a long time. Its graphics products were a bright point in the company's portfolio this year.
We end 2008 with both companies on fairly equal footing for most of their products. Overall performance and cost haven't been this close in a long time. But 2009 will likely see the market turn away from graphics and video performance alone as a measure of worth, toward the ever-broadening world of general-purpose computing on GPUs.
Nvidia has had its CUDA (Compute Unified Device Architecture) initiative for a long time now, and ATI has had its own stream computing push as well (now branded ATI Stream). 2008 was the year both companies really started to push GPU computing out of the lab and the HPC market and into the average consumer desktop PC. GPU-accelerated clients for Folding@home were only the beginning.
There are now video conversion applications that use the power of the GPU. With its acquisition of PhysX, Nvidia plans to push hard for GPU-accelerated physics effects in games. In fact, the company has taken to branding its products "Graphics Plus" to drive home the idea that they're useful for far more than just accelerating 3D games.
That movement is still in its infancy, though. The real benefit of GPU acceleration for general applications is still extraordinarily limited. And why wouldn't it be? Developers making applications for the general public don't want to write CUDA applications that will run only on Nvidia-based cards or ATI Stream (Brook+, usually) apps that run only on ATI cards. They want standards that will allow them to run on any GPU, and that's what we'll see in 2009.
OpenCL is a standard managed by the Khronos Group, the same group that manages the OpenGL standard. It's a standard programming model for targeting parallel computing devices like GPUs, and it can be used to write all the sorts of applications that benefit from massively parallel execution—all the stuff that people use CUDA and ATI Stream for today. Nvidia, Intel, AMD, and others sit on the OpenCL board, and both Nvidia and ATI have pledged to fully support the standard and have drivers ready post-haste.
The 1.0 specification was just ratified, and you'll see OpenCL drivers from the graphics guys in the first half of '09. Apple is one of the big pushers of the standard, and is expected to use it pretty broadly in the next version of OS X (Snow Leopard). Of course, it'll be available on Windows and likely Linux as well.
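To give you a feel for what that looks like in practice, here's a rough sketch of the OpenCL 1.0 host-side flow in C++. This is our own hypothetical example, not shipping code: the kernel (a simple RGBA-to-luma conversion, the kind of per-pixel work a GPU video converter does) and all the sizes are invented for illustration, and error checking is omitted.

    #include <CL/cl.h>
    #include <cstdio>

    // Hypothetical kernel: convert RGBA pixels to 8-bit luma, the sort of
    // embarrassingly parallel per-pixel work video converters offload to GPUs.
    static const char *kSource =
        "__kernel void rgba_to_luma(__global const uchar4 *in,         \n"
        "                           __global uchar *out)               \n"
        "{                                                             \n"
        "    int i = get_global_id(0);                                 \n"
        "    uchar4 p = in[i];                                         \n"
        "    out[i] = (uchar)(0.299f*p.x + 0.587f*p.y + 0.114f*p.z);   \n"
        "}                                                             \n";

    int main()
    {
        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
        cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

        // The key to vendor neutrality: the kernel ships as source, and
        // whatever driver is installed compiles it for its own GPU at runtime.
        cl_program prog = clCreateProgramWithSource(ctx, 1, &kSource, NULL, NULL);
        clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
        cl_kernel kernel = clCreateKernel(prog, "rgba_to_luma", NULL);

        const size_t pixels = 1920 * 1080;  // one HD frame's worth of work
        cl_mem in  = clCreateBuffer(ctx, CL_MEM_READ_ONLY,  pixels * 4, NULL, NULL);
        cl_mem out = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, pixels,     NULL, NULL);
        // (A real converter would upload frame data with clEnqueueWriteBuffer
        // and read the results back with clEnqueueReadBuffer.)

        clSetKernelArg(kernel, 0, sizeof(cl_mem), &in);
        clSetKernelArg(kernel, 1, sizeof(cl_mem), &out);

        // One work-item per pixel; the driver maps them onto the GPU's cores.
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &pixels, NULL, 0, NULL, NULL);
        clFinish(queue);

        printf("kernel dispatched over %zu pixels\n", pixels);
        return 0;
    }

The point to notice is that nothing in that code names a vendor: the same source runs on an Nvidia card, an ATI card, or any other OpenCL-capable device, which is exactly what consumer software developers have been waiting for.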
But OpenCL isn't the only standard way that developers will be able to harness the power of GPUs for more general parallel computing tasks. There's also DirectX 11, scheduled to arrive in the same general timeframe as Windows 7 (probably in the second half of 2009). DirectX 11 will also be available for Windows Vista, and some DX11 features will run on the DX10 hardware that's available today. One such feature is Compute Shaders, a set of new Direct3D API functions that let developers more directly access the GPU for general computing tasks.
The cool part of Compute Shaders is that they keep developers "inside" DirectX, so Compute Shaders can be used in games to write data out to arbitrary memory locations that can then be used in pixel, vertex, or geometry shaders. Likewise, the output from those graphics-related shaders can be read into Compute Shaders. This makes the GPU a much more efficient target for game-related physics, particle systems, AI, etc.
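For the curious, here's a hypothetical sketch of what that might look like from the developer's side. Keep in mind DirectX 11 is still pre-release, so exact names could shift before it ships; the particle-gravity kernel is our own invented example.

    #include <d3d11.h>
    #include <d3dcompiler.h>
    #include <cstring>

    // Hypothetical Compute Shader: nudge particle positions by gravity each
    // frame. The same buffer can later feed the vertex shader that draws the
    // particles, so the data never leaves video memory.
    static const char *kCS =
        "RWStructuredBuffer<float4> Particles : register(u0);      \n"
        "[numthreads(64, 1, 1)]                                    \n"
        "void main(uint3 id : SV_DispatchThreadID)                 \n"
        "{                                                         \n"
        "    Particles[id.x].y -= 9.8f * 0.016f;                   \n"
        "}                                                         \n";

    void StepParticles(ID3D11Device *dev, ID3D11DeviceContext *ctx,
                       ID3D11UnorderedAccessView *particlesUAV,
                       unsigned particleCount)
    {
        // Compiling for the cs_4_0 profile targets today's DX10-class hardware.
        ID3DBlob *bytecode = NULL;
        D3DCompile(kCS, strlen(kCS), NULL, NULL, NULL,
                   "main", "cs_4_0", 0, 0, &bytecode, NULL);

        ID3D11ComputeShader *cs = NULL;
        dev->CreateComputeShader(bytecode->GetBufferPointer(),
                                 bytecode->GetBufferSize(), NULL, &cs);

        // Bind the particle buffer for arbitrary scattered writes...
        ctx->CSSetShader(cs, NULL, 0);
        ctx->CSSetUnorderedAccessViews(0, 1, &particlesUAV, NULL);

        // ...and launch one thread per particle, 64 threads per group.
        ctx->Dispatch((particleCount + 63) / 64, 1, 1);
    }

Because the whole thing stays inside Direct3D, that updated buffer can be bound straight to the graphics pipeline on the next draw call; no copy back to system memory, and no second API to learn.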
DirectX 11 adds more than Compute Shaders, though. It's a superset of DirectX 10.1, taking all those requirements and features and adding new ones like tessellation: the ability to turn rough low-polygon geometry into detailed high-polygon geometry entirely on the graphics card. Tessellation in DX11 is more flexible and robust than the Xbox 360-style tessellation already available on modern ATI graphics cards.
However, this is one feature that will require new hardware; no current graphics card will handle DX11's tessellation features. DX11, along with new Windows Display Driver Model (WDDM) 1.1 drivers, will also do more to take advantage of multi-core CPUs to speed up graphics tasks, which should help eliminate bottlenecks in many games.
What of Nvidia's acquisition of PhysX? Will we see lots of big-name PC games in 2009 that use PhysX with GPU acceleration? That one is hard to predict. Mirror's Edge is the first AAA title to really make use of PhysX acceleration on Nvidia cards, and it's coming in January. It uses Unreal Engine 3, which integrates PhysX pretty heavily out of the box, so it wasn't too hard for Nvidia's engineers to do the work for EA needed to get GPU acceleration running. Will there be any AAA games that don't use UE3 but offer GPU-accelerated physics?
Right now, all the other upcoming accelerated PhysX titles are second-tier titles at best. That may continue through 2009, but we expect Nvidia to spend a lot of time and money working with developers (translation: doing the work for them) to enable PhysX in games. Nvidia didn't acquire PhysX for nothing, and the middleware is essentially free for game developers—if they don't sell more graphics cards with it, it's a waste of the company's money.
So we know what 2009's main targeted feature sets will be: OpenCL and DirectX 11 support will be new and will enable new applications to use GPUs from any vendor with the right drivers, but most games will still use DirectX 10. What will the hardware landscape look like?
It has long been rumored that Nvidia is working on shrinking their 65nm GT 200 GPU (found in GeForce GTX 260 and 280 cards) to 55nm, and we'll probably see both single and dual-GPU products based on this new chip very early in the year. ATI won't sit on their hands, though. An updated version of their RV770 GPU (found in Radeon HD 4870 and 4850 cards) is likely to make a first quarter appearance as well.
ATI's updated chips may even shrink the die from the current 55nm manufacturing process to 40nm. In both cases, that means faster overall performance at better prices, and better performance per watt, but no major new features.
Through the first half of the year, updated versions of the current chip designs from ATI and Nvidia should trickle down throughout the product lines, giving people cards that are roughly equivalent to the current products but with better performance, lower prices, and perhaps lower power consumption.
Both ATI and Nvidia will probably have DirectX 11 class hardware ready to roll within a couple months of Windows 7's launch, perhaps even preceding it. These will be new designs that offer some major new functionality, including support for all of DX11's new features. Of course, they'll probably also be the fastest cards you can buy for DirectX 10 games, too.
The battle for graphics supremacy will start to shift in 2009 from bragging about the best frame rates and image quality in games to bragging about the best performance in non-game applications that use OpenCL and DX11's Compute Shaders. At the same time, Nvidia will continue to support and promote CUDA and claim it as a serious benefit of their cards for general consumers. The new hardware architectures launching later in 2009 will likely be designed with specific hardware features to make these GP-GPU type applications run faster than ever.
Nifty applications like Badaboom notwithstanding, we're not exactly sure that CUDA-specific applications, or ATI Stream applications for that matter, will ever become commonplace enough for general consumers to base their purchase decisions on. Historically, it's cross-vendor standards like OpenCL and DX11's Compute Shaders that really take hold and see broad adoption in consumer applications. That won't stop the marketing departments, though.