http://we.pcinlife.com/thread-632666-1-1.html
A while back I already posted the complete G80 specs, in particular "REALISTIC HDR LIGHTING EFFECTS WITH ANTI-ALIASING PROVIDES TWICE THE PRECISION OF PREVIOUS GENERATIONS" :thumbsup:
Now INQ has finally decoded the G80 in full :w00t:
http://www.theinq.com/default.aspx?article=35483
THIS ARTICLE REVEALS all of the important information regarding the GeForce 8800 series, which is set to be released to the world on November 8th, 2006 in San Jose. We have learned that during the traditional Editor's Day in San Francisco nVidia kept to its rules, so "no porn surfing" and "no leaks to the Inquirer" banners were shown. But we have no hard feelings about that. It is up to the companies to either respect the millions of our readers, including employees of Nvidia, or... not.
As you already know, Adrianne Curry, a Playboy bunny, America's Next Top Model star and an actress from My Fair Brady, is the demo chick for G80. After we posted the story, we received a growl from Graphzilla, but we are here to serve you, our dear readers. However, that was just a story about the person who posed for the G80. Now it's time to reveal the hardware. Everything you want to know and don't want to wait until November 8th for lies in this article. Get your popcorn ready; this will be a messy ride.
For starters, the 8800 launch is a hard one, so expect partners to have boards in store for the big day's press conference at 11AM on the 8th. Board deliveries will come in several waves, with the first two separated by days. The boards were designed by ASUSTeK, marking a departure from the usual suspects at Micro-Star International. This is also the first ever black graphics card from nVidia. Bear in mind that every 8800GTX and 8800GTS is manufactured by ASUS. AIBs (add-in board vendors) can only change the cooling, and no overclocking is allowed on 1st-gen products. Expect a very limited allocation of these boards, with the UK alone getting a mere 200 boards.
The numbers
G80 is a 681 million transistor chip manufactured by TSMC. Since Graphzilla opted for the traditional approach, it eats up around 140 Watts of power. The rest gets eaten by Nvidia's I/O chip, video memory and the losses in power conversion on the PCB itself.
If you remember the previous marchitecture, the G70 GPU in the 7800GTX 256MB, you will probably remember that the Pixel and Vertex Shader units worked at different clock speeds. G80 takes this a step further, with a massive increase in the clock speed of the Shader units.
GigaThread is the name of the G80 marchitecture, which supports thousands of threads in flight - similar in aim to ATI's RingBus - keeping all of the Shader units well fed. G80 comes with 128 scalar Shader units, which Nvidia calls Stream Processors.
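A toy model of why scalar units matter (our illustration, not Nvidia's actual scheduler): a traditional vec4 ALU issues four lanes per instruction whether the shader needs them or not, while scalar Stream Processors spend only as many issue slots as each instruction actually uses.

```cpp
// Toy model (not Nvidia's scheduler): compares how many ALU issue slots a
// shader burns on a vec4 unit versus on scalar units like the G80's SPs.
#include <cstdio>
#include <vector>

int main() {
    // Components each instruction actually touches (e.g. a dot product
    // writes 1 component, a float4 multiply writes 4).
    std::vector<int> shader = {4, 1, 3, 1, 1, 4, 2};

    int vec4_slots = 0;   // a vec4 ALU always issues 4 lanes, used or not
    int scalar_slots = 0; // scalar SPs issue only the ops actually needed
    for (int components : shader) {
        vec4_slots   += 4;
        scalar_slots += components;
    }

    std::printf("vec4 ALU slots issued: %d\n", vec4_slots);
    std::printf("scalar SP slots issued: %d\n", scalar_slots);
    std::printf("vec4 utilization: %.0f%%\n", 100.0 * scalar_slots / vec4_slots);
    return 0;
}
```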
The reason Nvidia went with the SP description is a DirectX 10 feature called Stream Output: those Shader units can now work on Pixel, Vertex, Geometry and Physics instructions, though not all at the same time. The feature, in short, enables data from vertex or geometry shaders to be written to memory and fed back to the top of the GPU pipeline to be processed again. This lets developers put more elaborate lighting calculations, physics calculations, or just more complex geometry processing into the engine. Read: more stuff for fewer transistors.
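To picture what Stream Output buys, here is a minimal sketch in plain C++ standing in for the pipeline (the names are ours, not the D3D10 API): a "geometry pass" writes its amplified output to a buffer, and that buffer is bound straight back as the input of the next pass, with no CPU round trip.

```cpp
// Conceptual sketch of the Stream Output feedback loop, in plain C++.
#include <cstdio>
#include <vector>

struct Vertex { float x, y, z; };

// Stand-in for a geometry shader that amplifies data: each input vertex
// is split into two (think subdivision or particle spawning).
std::vector<Vertex> geometry_pass(const std::vector<Vertex>& in) {
    std::vector<Vertex> out;
    for (const Vertex& v : in) {
        out.push_back({v.x - 0.1f, v.y, v.z});
        out.push_back({v.x + 0.1f, v.y, v.z});
    }
    return out; // with Stream Output, this buffer lands in video memory
}

int main() {
    std::vector<Vertex> buffer = {{0, 0, 0}, {1, 0, 0}};

    // The pass's results are written out and bound as the *input* of the
    // next pass - the feedback loop the article describes.
    for (int pass = 0; pass < 3; ++pass) {
        buffer = geometry_pass(buffer);
        std::printf("after pass %d: %zu vertices\n", pass + 1, buffer.size());
    }
    return 0;
}
```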
In order to enable all that, Nvidia pulled a CPU-style approach and stuffed L1 and L2 caches across the chip. You might also like to know that both Geometry and Vertex Shader programs support Vertex Texturing.
And when it comes to texturing itself, G80 features 64 Texture Filtering Units, which can feed the rest of the GPU with 64 pixels in a single clock; for comparison, the GF7800GTX could manage only 24. Depending on the method of texture sampling and filtering used, G80 delivers from 18.4 to 36.8 billion texels per second. Pixel-wise, the G80 churns out 36.8 billion finished pixels in a single second.
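Those texel figures follow straight from units times clock. A quick sanity check (the 575 MHz core clock here is inferred from the article's own numbers, 36.8 billion / 64 units - the article itself does not state it):

```cpp
// Back-of-the-envelope check of the fill-rate figures: 64 texture filtering
// units times the core clock. The 575 MHz value is inferred from the
// article's own numbers (36.8e9 texels/s / 64 units), not stated in it.
#include <cstdio>

int main() {
    const double units      = 64;     // texture filtering units
    const double core_clock = 575e6;  // Hz, inferred as noted above
    double full_rate = units * core_clock; // one filtered texel/unit/clock
    double half_rate = full_rate / 2.0;    // heavier filtering at half speed

    std::printf("full-speed filtering: %.1f Gtexels/s\n", full_rate / 1e9);
    std::printf("half-speed filtering: %.1f Gtexels/s\n", half_rate / 1e9);
    return 0;
}
```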
When it comes to RingBus vs. GigaThread, DAAMIT's X1900 has a branch granularity of 48 pixels, while the X1800 can do 16. The GeForce 8800GTX can do 32-pixel threads in some cases, but mostly the chip will handle 16, so you can expect Nvidia to lose out on the GPGPU front (in Folding@Home, for instance).
However, Nvidia claims 100% efficiency, and we know for sure that ATI mostly runs in the high 60s to high 70s, in percentage terms.
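Branch granularity matters because pixels run through the shaders in fixed-size batches: if any pixel in a batch takes a branch, every pixel in the batch sits through it. A minimal simulation of that effect (the workload pattern is made up for illustration; this is not measured behavior of either chip):

```cpp
// Simulates the cost of a divergent branch at different batch granularities.
// Workload and numbers are illustrative, not measured on real hardware.
#include <cstdio>
#include <vector>

// Count the pixel-slots of wasted work when pixels are grouped into batches
// of `granularity`: a batch pays for the branch if any one pixel takes it.
int wasted_slots(const std::vector<bool>& takes_branch, int granularity) {
    int wasted = 0;
    for (size_t base = 0; base < takes_branch.size(); base += granularity) {
        bool any_taken = false;
        int not_taken = 0;
        for (int i = 0; i < granularity && base + i < takes_branch.size(); ++i) {
            if (takes_branch[base + i]) any_taken = true;
            else ++not_taken;
        }
        if (any_taken) wasted += not_taken; // idle lanes sit through the branch
    }
    return wasted;
}

int main() {
    // Illustrative pattern: every 96th pixel takes the expensive branch.
    std::vector<bool> pixels(4800);
    for (size_t i = 0; i < pixels.size(); i += 96) pixels[i] = true;

    for (int g : {16, 32, 48})
        std::printf("granularity %2d: %d wasted pixel-slots\n",
                    g, wasted_slots(pixels, g));
    return 0;
}
```

The finer the granularity, the fewer idle lanes get dragged through a divergent branch, which is exactly why the figure matters for GPGPU-style workloads.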