
Nvidia's Chief Scientist Bill Dally: GPU, DX 11 and Intel's LRB

#1 | Posted 2009-8-12 12:31
http://www.pcgameshardware.com/aid,692145/Nvidias-Chief-Scientist-Bill-Dally-about-GPU-technology-DirectX-11-and-Intels-Larrabee/News/

PC Games Hardware had the chance to conduct an extensive interview with Nvidia's new chief scientist Bill Dally. He talks about his view on GPU technology, necessary improvements, DirectX 11 and the possible threat to Nvidia from Intel's Larrabee project.
Nvidia's Chief Scientist Bill Dally took the time during a visit to Nvidia's office in Munich, Germany, to talk to PC Games Hardware about his vision for future developments in the GPU area, the threat that Larrabee might pose to the comparatively traditional GPUs Nvidia is making, and some more topics revealed only in the full text.

The following interview was conducted during a visit to Nvidia's local office in Germany before a roundtable talk with key representatives of the press from all over Europe. It is a transcript of a live conversation, so please pardon any grammatical errors we may have made.



[Image: Nvidia's Chief Scientist Bill Dally]

PCGH: Bill, thank you for taking the time to give us this interview and welcome to Germany! You took over the position of chief scientist at Nvidia in January, I think.
Bill Dally: That's correct.

PCGH: So, could you please introduce yourself to our readers and tell us a bit about your background?
Bill Dally: My name is Bill Dally and up until January I was chairman of the computer science department at Stanford University. I've been in the academic world since the mid-eighties: I was a professor at MIT in Boston from 1986 to 1997, then I joined the Stanford faculty in 1997 and have been there ever since.

My expertise is mostly in the area of parallel computing. I've built a number of experimental parallel machines over the years: the J-Machine, the M-Machine, the Imagine Stream Processor and most recently a processor called ELM. I've also done a lot of work on interconnection networks, designing a number of networks that were used in Cray supercomputers in the nineteen-nineties, and more recently coming up with the Flattened Butterfly and Dragonfly topologies.

At Nvidia I'm chief scientist, which really involves three pieces. One is to set technical direction and consult with the product groups to influence the technologies that make our products better going forward. The second is to lead Nvidia Research, which has the goal of looking 5-10 years ahead, identifying challenges and opportunities and developing strategic technologies to make Nvidia products more competitive. Finally there is an outreach component where I meet with customers, partners and university researchers, and evangelize GPU computing and the CUDA programming language.

PCGH: What were your main reasons to leave the academic field and join Nvidia? What do you think you can do at Nvidia that you couldn't do in the academic field or at another company?
Bill Dally: Good question. I thought it was a really compelling opportunity to influence the future of computing. I think we're at this critical juncture in computing, where we're going from serial computing to parallel computing. And Nvidia is uniquely positioned, I think, to play a very important role as the bulk of computing becomes parallel. And I thought it was a great opportunity to be a part of that.

PCGH: Well, you already told us a bit about being chief scientist. Does that also have something to do with actual microprocessor design, or are you more like the visionary for Nvidia Research who says "Well, let's head in that direction"?
Bill Dally: It's a little bit of both. It's sort of more on the visionary end and I can sort of think the big thought. Nvidia has a very large number of extremely talented engineers and engineering managers that do the detailed design. I do get involved with them though when there are ideas that I want to see incorporated into future generation products or if there's a particular area where I think I can lend some expertise to help make the product better.

PCGH: You also mentioned that you have a strong background in parallel computing, and you have already headed more than one company producing actual microprocessors, like Stream Processors Inc. and Velio Communications.
Bill Dally: Yes, I've had several start-up-companies over the years.

PCGH: Do you think that is going to be a major factor of influence on your work at Nvidia? Especially the network component between distributed processing nodes maybe?
Bill Dally: I think so. You know, you learn something from every company you're involved with and every job you have in your career, and so I'm bringing everything I've learned to Nvidia and trying to make the best use of that to make Nvidia products better.

PCGH: OK, but you're not saying "I'm only concentrating on building a very fast network of highly specialized processors" to sort of replace GPUs in the traditional sense?
Bill Dally: No, I mean that would be like a solution looking for a problem. I like to look at it the other way. Our products are very good right now, so if you want to make any changes, you have to be careful that those changes are strategic and well thought out. We don't just want to apply technology because the technology is there; we want to see where the need is, how we can make our products better and what the appropriate technology is to meet that need.

PCGH: You have replaced David Kirk as chief scientist. How do you think you will be a different kind of chief scientist? What will you do differently than your predecessor?
Bill Dally: David is an Nvidia Fellow now. And I think that I complement him well. I mean, David is a real expert on graphics, in particular on ray tracing, and I am a real expert on parallel computing. Both graphics and parallel computing are very important areas for Nvidia. So I think I'm gonna continue many of the things David did - I think he did a wonderful job as chief scientist. I'm gonna try to expand our research in particular into a lot of areas that have to do with how we implement our GPUs.

He created Nvidia Research and focused it very much on the application end of things - how to deliver better graphics, a number of things having to do with GPU computing, and in particular ray tracing. We're continuing those activities and actually still consider them really important areas. But also, in our mission of looking 5-10 years forward, it's very important to us to look at things like VLSI design, computer architecture, compilers and programming systems. And so I'm expanding Nvidia Research in those directions so we can have a long-term view and identify strategic opportunities there as well as in the application spaces.

PCGH: Now you've mentioned it two times: ray tracing. I'll have to go in that direction a bit. Intel made a lot of fuss about ray tracing in the last 18 months or so. Do you think that's going to be a major part of computer graphics, and especially gaming graphics, in the foreseeable future - until 2015, maybe?
Bill Dally: It's interesting that they've made a big fuss about it, while we had a demonstration of real-time ray tracing at Siggraph last year. It's one thing to make a fuss; it's another thing to demonstrate it running in real time on GPUs.

But to answer that: what I see as most likely for game graphics going forward is hybrid graphics, where you start out by rasterizing the scene and then you make a decision at each fragment whether that fragment can be rendered just with a shader calculation using local information, or whether - if it's a specular surface, or a transparent surface, or there's a silhouette edge and soft shadows are important - you may need to cast rays to compute a very accurate and photorealistic color for that point. So I think it's gonna be a hybrid version where some pixels are rendered conventionally and some pixels involve ray tracing, and that gives us the most efficient use of our computational resources - using ray tracing where it does the most good.

PCGH: That doesn't sound like the traditional hybrid approach, where you have the overhead of the sparse octree and do the geometry stuff on the ray-tracing side of the engine and the pixel stuff on the rasterization side. You said you're going down to fragment level - which sounds like running CUDA kernels for each fragment.
Bill Dally: We do that today. It's called pixel shaders. [laughs]

PCGH: Yes, but a different kind of pixel shader.
Bill Dally: Right, one thing a pixel shader could do is to cast a ray, and only then do you go through the space partition tree, which is the acceleration structure.

PCGH: So you would like to move the decision from the game developer to the compiler/driver level?
Bill Dally: No, the game developer can write the shader and the shader is making this decision.
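
[Editor's note: To make the hybrid approach Dally describes a bit more concrete, here is a minimal, purely illustrative CUDA-style sketch of a per-fragment kernel that shades most pixels locally and casts rays only for the "hard" cases he lists (specular or transparent surfaces, silhouette edges with soft shadows). All type and function names below are our own assumptions, not Nvidia code.]

#include <cuda_runtime.h>

// Illustrative only: decide per fragment whether cheap local shading is enough
// or whether a ray has to be traced through an acceleration structure.
struct BVHNode { int left, right; };   // stand-in for the space-partition tree
struct Fragment {
    float3 albedo;
    bool specular, transparent, onSilhouette;
};

__device__ float3 shadeLocal(const Fragment& f)                { return f.albedo; } // placeholder local lighting
__device__ float3 traceRay(const Fragment& f, const BVHNode*)  { return f.albedo; } // placeholder ray cast

__global__ void hybridShade(const Fragment* frags, const BVHNode* bvh,
                            float3* framebuffer, int numFrags)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numFrags) return;

    Fragment f = frags[i];
    if (!f.specular && !f.transparent && !f.onSilhouette)
        framebuffer[i] = shadeLocal(f);      // most pixels: conventional shading
    else
        framebuffer[i] = traceRay(f, bvh);   // hard pixels: cast rays where they do the most good
}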

PCGH: While we're at it: Intel also made a big fuss about Larrabee.
Bill Dally: M-hm.

PCGH: They are aiming for a mostly programmable architecture there. They state that only 10 percent of the whole die is dedicated to graphics, the rest being completely programmable, according to Intel. And still they want to compete in the high end with your GPUs. Do you think that's feasible right now?
Bill Dally: First of all, right now Larrabee is a bunch of viewgraphs. So until they actually have a product, it's difficult to say how good it is or what it does. You have to be careful not to read too much into viewgraphs - it's easy to be perfect when all you have to do is be a viewgraph. It's much harder when you have to deliver a product that actually works.

But to the question of the degree of fixed-function hardware: I think it puts them at a very serious disadvantage. Our understanding of Larrabee, which is based on their paper at Siggraph last summer and the two presentations at the Game Developers Conference in April, is that they have fixed-function hardware for texture filtering, but they do not have any fixed-function hardware for either rasterization or compositing, and I think that puts them at a very serious disadvantage. Because for those parts of the graphics pipeline they're gonna have to pay 20 times or more energy than we will for those computations. And so, while we also have the option of doing rasterization in software if we want - we can write a kernel for that running on our Streaming Multiprocessors - we also have the option of using our rasterizer to do it, and do it far more efficiently. So I think it puts them at a very big disadvantage power-wise not to have fixed-function hardware for these critical functions, because everybody in a particular envelope is dominated by their power consumption. It means that at a given power level they're going to deliver much lower graphics performance.

I think also that the fact that they've adopted an x86 instruction set puts them at a disadvantage. It's a complex instruction set, it's got instruction prefixes, it only has eight registers, and while they claim that this gives them code compatibility, it gives them code compatibility only if they want to run one core without the SIMD extension. To use the 32 cores or the 16-wide SIMD extension, they have to write a parallel program, so they have to start over again anyway. And they might as well have started over with a clean instruction set and not carry the area and power cost of interpreting a very complicated instruction set - that puts them at a disadvantage as well.

So while we keep a close eye on Larrabee - Intel is a very capable company, and you always worry when a very capable company tries to eat your lunch - we're not too worried about Larrabee, at least based on what they have disclosed so far.

PCGH: Will the driver team be a major contributor or a limiting factor? Intel's integrated graphics do not have a very good reputation for their drivers, and it seems that both AMD and Nvidia are putting real loads of effort into their drivers.
Bill Dally: I think that you have to deliver a total solution, and if any part of the solution is not competitive, then the whole solution is not competitive. And our view, based on what's been disclosed until today, is that the hardware itself is not going to be competitive, and if they have a poor driver as well, that only makes it worse. But even a good driver won't save the hardware.


[Image: Nvidia's Chief Scientist Bill Dally]

PCGH: So you think - based on available information - that when it [Larrabee] comes out in early 2010, it will not be competitive compared to the high-end GPUs of that time?
Bill Dally: That's our view, yes.

PCGH: Recently you have introduced new mobile Geforce parts which support DX10.1. Did your work already have an influence on those parts, or were they already completed when you joined Nvidia?
Bill Dally: That was completely before my time. Those were already in the pipe. I'm tending to look a bit further out, so...

PCGH: When do you think we're going to see products on the shelves, that were influenced by your work?
Bill Dally: I've had small influences on some of the products that are going to be coming out towards the end of this year but those products were largely defined and it was just little tweaks toward the end. It's really gonna be the products in about the 2011 time frame that I will be involved in from the earlier stages.

PCGH: That's about the same time we're expecting the new game consoles. Is that also an opportunity Nvidia is looking forward to? What's your take on that?
Bill Dally: We're certainly very interested in game console opportunities. [smiles]

PCGH: With Microsoft's Windows 7, and thus DirectX 11, expected on the shelves from October 22nd, do you expect a large impact on graphics cards sales from that?
Bill Dally: I actually don't know what drives the sales that much, but I would hope that people appreciate the GPU a lot more with Windows 7 because of DirectX Compute and the fact that the operating system both makes use of the GPU itself and exposes it in its APIs for applications to use.

PCGH: Independently of whether it's a DX11 or a DX10/10.1 GPU?
Bill Dally: If it supports DirectX Compute, then it doesn't need to be DX 11.

PCGH: No, but there are different levels of DX Compute, if I'm not mistaken: DX11 Compute and then the downlevel shaders called DX Compute 4.0 and 4.1?
Bill Dally: No, you're exceeding my knowledge a bit right now.

PCGH: Speaking of DirectX 11: Were you personally surprised when AMD was showing DX11 hardware at Computex?
Bill Dally: No, not particularly. We had had some advanced word of that.

PCGH: OK, going back to the new mobile parts: they have a much improved GFLOPS-per-watt ratio, almost double that of the previous generation. Is this the way to go for the future - to squeeze out the maximum number of FLOPS per watt?
Bill Dally: We consider power efficiency a first-class problem. It's driven starting from our mobile offerings, but it's actually important across the product line. I mean, at every power point - even for the 225-watt top-of-the-line GPUs - we absolutely have to deliver as much performance as we can in that power envelope. So a lot of the techniques that we use in our mobile devices, things like very aggressive clock gating and power gating, are being used across the product line.

PCGH: How fast is the ALU/FLOP-ratio evolving? Is the move towards more FLOPS accelerating in the future?
Bill Dally: Texturing and FLOPS actually tend to hold a pretty constant ratio, and that's driven by what the shaders we consider important are using. We're constantly benchmarking against different developers' shaders to see what our performance bottlenecks are. If we're gonna be texture limited on our next generation, we pop another texture unit down. Our architecture is very modular and that makes it easy to re-balance.

The ratio of FLOPS to bandwidth - off-chip bandwidth - is increasing. This is, I think, driven by two things. One is, fortunately, that shaders are becoming more complex; that's what they want anyway. The other is that it's just much less expensive to provide FLOPS than it is to provide bandwidth. So you tend to provide more of the thing which is less expensive and then try to completely saturate the critical expensive resource, which is the memory bandwidth.
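
[Editor's note: A rough back-of-the-envelope illustration of that growing ratio, using round numbers of our own choosing rather than figures Dally gave. For a GPU with about 1 TFLOP/s of single-precision compute and about 150 GB/s of off-chip bandwidth:
\[
\frac{10^{12}\ \text{FLOP/s}}{150 \times 10^{9}\ \text{B/s}} \approx 6.7\ \text{FLOP/byte} \approx 27\ \text{FLOP per 4-byte float},
\]
i.e. a shader needs on the order of 25-30 arithmetic operations per value read from DRAM to keep the ALUs busy, and that number rises every generation because FLOPS grow faster than off-chip bandwidth.]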

PCGH: Do you think that DX Compute is going to facilitate a more rapid increase of GFLOPS per square millimeter than was seen in the past?
Bill Dally: We try to deliver as many GFLOPS per square millimeter as we can, regardless of how people program it. So I think DirectX Compute will enable more applications - both within Microsoft's software and in third-party Windows software - to use the power of the GPU. We're gonna deliver the absolute best performance per square millimeter regardless of how people program it. That's something we are constantly striving to improve in our engineering effort.

PCGH: Do you think a large leap in available bandwidth will be necessary for next-generation hardware - for example for DirectX 11 with its focus on random R/W (scatter and gather operations etc.), which should benefit greatly from more, or at least more granular, memory access?
Bill Dally: Almost everything would benefit from more bandwidth and being able to do it at a finer grain. But I don't think that there's gonna be any large jumps. I think we're gonna evolve our memory bandwidth as the GDDR memory components evolve and track that increase.

PCGH: But with GDDR5 becoming widely available, there obviously is a lot of additional memory bandwidth to be had at a given bus width.
Bill Dally: It's... It depends on how you define "a lot". It's a jump, but it's not an enormous jump.

PCGH: I'd say about 50 percent more.
Bill Dally: M-hm.

PCGH: That'd count as a jump for me.
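
[Editor's note: For readers who want the arithmetic behind that exchange, with illustrative per-pin data rates assumed from cards shipping around that time:
\[
\text{bandwidth} = \frac{\text{bus width}}{8} \times \text{data rate per pin}
\]
\[
\text{256-bit GDDR3 at } 2.2\ \text{Gbps/pin: } 32 \times 2.2 \approx 70\ \text{GB/s}, \qquad
\text{256-bit GDDR5 at } 3.6\ \text{Gbps/pin: } 32 \times 3.6 \approx 115\ \text{GB/s},
\]
which is the roughly 50-plus percent jump at an unchanged bus width that PCGH is referring to.]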

PCGH: With GT200, Nvidia has introduced double-precision capability even into the consumer line of graphics cards. Do you think this is going to be widely used in the foreseeable future?
Bill Dally: The double precision is mostly there to support our Tesla product line. It is critically important to the people who do scientific computing on our GPUs to have double precision. So going forward, the GPUs that we aim at the scientific computing market will have even better double-precision floating point than what's in GT200. That ratio of double precision to single precision, which is now one double-precision operation per eight single-precision operations, will get closer. An ultimate ratio to target is something like two to one. Right now, I don't think there's much in the consumer space that uses double precision. Almost all the consumer applications we see just use single-precision floating point.
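
[Editor's note: The one-to-eight figure follows from the GT200 layout, where each of the 30 Streaming Multiprocessors pairs eight single-precision ALUs with a single double-precision unit:
\[
\frac{30 \times 1\ \text{DP unit}}{30 \times 8\ \text{SP ALUs}} = \frac{1}{8}.
\]
Moving toward the two-to-one target Dally mentions would therefore mean adding considerably more double-precision throughput per SM in future scientific-computing parts.]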


[Image: Nvidia's Chief Scientist Bill Dally]

PCGH: Would it make sense to use double precision in the future? Maybe for physics calculations? Is there a need for higher precision?
Bill Dally: I don't see a pressing need for that. Now, game physics and the physics that is done for industrial and research applications differ in a big way, in the sense that game physics only needs to look good - it doesn't actually have to be right. Whereas if you're designing an aircraft, it has to be right. And for that reason, the people who do game physics are usually pretty satisfied with single precision.

PCGH: Would it make sense then to develop different chips for the professional market, like the Tesla products? I noticed you specifically mentioned Tesla, not Quadro. Is there an in-between market where some Quadro users could use 64-bit, or is it Tesla on the one side and Geforce/Quadro on the other?
Bill Dally: It's interesting: I think Quadro could wind up on either side of this. There may wind up being Quadro applications that want to use double precision.

PCGH: And what about designing specific chips for specific markets in that regard? I mean, with Tegra you're also designing very specific chips for the mobile market.
Bill Dally: And even within each of our GPU families, like G80 and GT200, we bring out a line of chips aimed at different price points. You could imagine those evolving into, you know, different features for different markets.

PCGH: We were already talking about physics a bit. You have contracts with major publishing studios like EA, THQ and others for using physics - the PhysX API - in their games. But there are very few AAA titles actually using GPU-accelerated physics. Do you think this is going to change in the near future? Or what is the reason that games like Crysis, or Crysis 2, which isn't yet available, do not support GPU-accelerated physics yet?
Bill Dally: I am not as far into the game developers as our dev-tech people are. But I am optimistic that the game developers are going to embrace GPU-accelerated physics just because it makes the games much more compelling. If you've seen the demos of the same game with physics turned on and off - it's a world of difference.

PCGH: GPU physics is a whole different level of physics in the game. I mean, there are tons of particles flying around, and fluid simulations, which are just not there with CPU-based physics. Do you think that's a problem (or an opportunity) with game consoles, because they don't have GPU-accelerated physics and we're seeing more and more developers going multi-platform?
Bill Dally: I would hope that it's an opportunity, because if the future platforms use our GPUs, they can GPU-accelerate physics in just the same way that we do it on our PC platforms.

PCGH: Talking about multi-platform titles: current-generation game consoles use hardware that is four to five years old - at least the design is. PC graphics can be better than on the consoles. Is it difficult to motivate game developers to build better graphics for PC games when they are developing a multi-platform title which may end up being graphics-bound on the consoles?
Bill Dally: I don't have direct experience with that; our dev-tech people tend to work with the game developers. They seem to be getting them to use our GPUs to the best of their capabilities, but I don't know quite what challenges they face in doing that.

PCGH: Another topic: in contrast to your competitor, Nvidia's GPUs - at least the high-end ones - have in the last couple of years always been physically very large. AMD is going the route of having a medium-sized die and scaling it with X2 configurations for high-end needs, while Nvidia is producing very large GPUs. Is that a trend which could change in the future, or don't you think you have reached the limits of integration in single chips, the "big blocks of graphics"?
Bill Dally: We're always trying to deliver the best performance and value to our customers, and we're gonna continue doing that. For any given generation there's an economic decision that has to be made about how large to make the die. Our architecture is very scalable, so the larger we make the die, the more performance we deliver. We also deliver duplex configurations, as in the GTX 295, and so if we build a very large die and then also put two of them together, we can deliver even more performance. So for each generation we're gonna do the calculation of what is the most economic way of delivering the best performance for our customers.

PCGH: Is this decision driven more by financial economics or power economics? Like, if we go for this and that power envelope, we can squeeze a dual configuration onto one PCI Express card.
Bill Dally: It's one combined calculation. We are trying to deliver the best performance subject to a number of constraints that set the edges of the envelope. Some of those constraints are financial, and some of those constraints are physical things like board area and power - you know, how large your die can be and still fit a particular type of package.

PCGH: Starting with G80, we've seen the integration of shared memory, scratchpads and the like for data sharing between individual SIMDs. This was obviously done specifically for GPGPU and such. Are we going to see more of that non-logic area in future GPUs?
Bill Dally: First of all, we like to call it GPU computing, not GPGPU. GPGPU refers to using shaders to do computing, while GPU computing uses the native compute capability of the hardware. To answer that particular question: the answer is yes, although it's gonna happen over many generations of future hardware. We see improving the on-chip memory system as a critical technology for enabling more performance, because the off-chip bandwidth is scaling at a rate that's slower than the amount of floating point we can put on the die. So we need to do more things like the shared memory that's exposed to the multiple threads within a cooperative thread array.
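
[Editor's note: A minimal CUDA sketch of what "shared memory exposed to the threads within a cooperative thread array" looks like in practice. Each thread block stages its inputs in on-chip __shared__ memory and reduces them there, so only one value per block travels over the scarce off-chip bus; the kernel and buffer names are illustrative.]

#include <cuda_runtime.h>

// Each cooperative thread array (thread block) of 256 threads sums its slice of
// the input using on-chip shared memory; only the per-block total is written to DRAM.
__global__ void blockSum(const float* in, float* blockResults, int n)
{
    __shared__ float tile[256];              // on-chip memory visible to the whole CTA

    int gid = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (gid < n) ? in[gid] : 0.0f;
    __syncthreads();                         // every thread in the CTA sees the staged data

    // Tree reduction entirely in shared memory: no extra off-chip traffic.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride)
            tile[threadIdx.x] += tile[threadIdx.x + stride];
        __syncthreads();
    }

    if (threadIdx.x == 0)
        blockResults[blockIdx.x] = tile[0];  // one DRAM write per block instead of 256
}

// Launch with 256 threads per block, e.g.: blockSum<<<(n + 255) / 256, 256>>>(d_in, d_out, n);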

PCGH: Do you envision future GPUs more like the current approach, where you have one large scheduling block and then the work gets distributed to each cluster or is it going to be a more independent collaboration of work units where each rendering cluster, each SIMD unit gets more and more independent?
Bill Dally: In the immediate future, I think things are gonna wind up very much the way they are today - I'm viewing it as a flat set of resources where the work gets spread uniformly across them. I think ultimately, going forward, we are going to need to build a more hierarchical structure into our GPUs, both in the memory system and in how work gets scheduled.

PCGH: So that's a more distributed approach then for computing in graphics or computing in whatever the operating system uses the GPU for.
Bill Dally: Yeah.

PCGH: With DirectX Compute and OpenCL we have two programming standards/languages for using the GPU for something other than strict graphics - and you also support CUDA, so you have to support three standards for programming GPUs. Isn't that a bit too much choice for developers? I mean, you're spreading your resources out over three different branches.
Bill Dally: We are and we aren't. In some sense all three of those branches, plus the FORTRAN branch that you didn't mention, feed the underlying CUDA architecture. The CUDA parallel computing architecture is based on the abstraction of the cooperative thread array, or thread block, that shares memory. And all four of those standards basically feed that same underlying interface, and do it in a fairly consistent way. So there is a certain amount of software work required to support each of those, but a lot of the underlying work is shared.

PCGH: Ok, so it's not four times as much work as just supporting one standard.
Bill Dally: Right.

PCGH: CUDA is quite flexible, as we've seen, and your GPUs are also programmable in FORTRAN, as you just mentioned. Is it also possible to extend that to mobile parts, like future generations of Tegra for example?
Bill Dally: Yes, future generations of Tegra will have exactly the same Streaming Multiprocessor core as part of their GPU as the rest of the product line. That's not in the current generation of Tegra, but in the future, it'll be the case.

PCGH: So I can just take my CUDA program, port the host-based part of the code to the ARM architecture, and then it'll run on a smartphone, for example?
Bill Dally: That's the goal, yes. We want CUDA to run across the entire product line. CUDA everywhere.

PCGH: So it is more important to you to support CUDA everywhere than to make a decision between host architectures, between ARM and x86 for example?
Bill Dally: I don't think the instruction set of the CPU matters a whole lot for running the CUDA program, which is running on the GPU. Even on the GPU side we have an abstraction with PTX, which then gets translated to support multiple generations of Streaming Multiprocessors that have slightly different native assembly languages. So it doesn't matter what the native assembly language of the host is.
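
[Editor's note: A minimal sketch of the PTX indirection Dally describes, using the public CUDA driver API; the file name "kernel.ptx" and the kernel name "myKernel" are hypothetical, and error checking is omitted. Device code is compiled once to PTX (for example with "nvcc --ptx kernel.cu"), and the driver translates it at load time to the native instruction set of whatever Streaming Multiprocessor generation is installed, regardless of whether the host CPU is x86, ARM or PowerPC.]

#include <cuda.h>
#include <stddef.h>

int main(void)
{
    CUdevice   dev;
    CUcontext  ctx;
    CUmodule   mod;
    CUfunction fn;

    cuInit(0);
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);

    // The driver JIT-translates the PTX into the installed GPU's native assembly here.
    cuModuleLoad(&mod, "kernel.ptx");            // hypothetical PTX file
    cuModuleGetFunction(&fn, mod, "myKernel");   // hypothetical kernel entry point

    // Launch one block of 256 threads; this sketch assumes the kernel takes no arguments.
    cuLaunchKernel(fn, 1, 1, 1, 256, 1, 1, 0, NULL, NULL, NULL);
    cuCtxSynchronize();

    cuModuleUnload(mod);
    cuCtxDestroy(ctx);
    return 0;
}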

PCGH: You don't see a need for unification in that area, so that consumers can have their home electronics - from the PC, to the game consoles, maybe even to the fridge - running the same applications, like arranging their schedules from the screen on the fridge as well as on the game console in the living room or the PC?
Bill Dally: I think that consumers probably do want consistency, but I'm not sure that the host instruction set matters a whole lot providing that consistency. I think it can be provided independent of that.

PCGH: In game consoles with PowerPC, there's a different architecture than in the mobile and PC space. So you don't see problems there, with the host interface not mattering a lot?
Bill Dally: Yeah. I think there's a byte-ordering issue that comes up with PowerPC, but even there we can interact with just about any host processor you may want to put on the other end of the interface.

PCGH: If one were to integrate the latency- and the throughput processors into one die or onto one package, would that still be heterogeneous computing in your mind?
Bill Dally: I think how you package really doesn't matter. It's heterogeneous computing when you have two types of cores: the latency-optimized core and the throughput-optimized core. Whether you integrate them or not is really mostly an economic question - whether it's more economical to fabricate a single die with both of them on it, as our Tegra product does, or whether you're better off having each on a separate die, both for manufacturing economics and to give you the flexibility of mixing and matching the latency processor and the throughput processor in different ways.

PCGH: What do you think: in which area are current GPUs - throughput processors, or whatever you may call them - lacking most? Which area should be improved first?
Bill Dally: Well, they're actually pretty good, so it's hard to find fault with them. But there's always room for improvement. I think it's not about what they're lacking, but about opportunities to make them even better. The area where there are opportunities to make them even better is mostly the memory system. I think that we're increasingly becoming limited by memory bandwidth, on both the graphics and the compute side. And I think there's an opportunity, going from the hundreds of processors we're at today to the thousands of cores we're gonna be at in the near future, to build more robust memory hierarchies on chip to make better use of the off-chip bandwidth.


PCGH: Thank you very much for your time, Bill!
#9 | Posted 2009-8-14 09:22
Reply to #4 (westlee):
Bill is a real heavyweight in graphics computing. In fact, Stanford's roots in graphics run very deep; from Imagine to Brook it has been one continuous research team.
It's nVidia's honor to have brought Bill on board.
As for specific technologies, a big boss generally ignores the details.

#8 | Posted 2009-8-14 01:21
Even though I don't understand it, I still want to take a look.

#7 | Posted 2009-8-14 00:56
Maybe he said that on purpose. Remember how, around G80, NV kept spreading rumors that it wasn't considering a unified shader design? Sometimes in this business everyone serves their own master... it's understandable.

#6 | westlee (this user has been deleted) | Posted 2009-8-13 23:24
Notice: the author has been banned or deleted, so the content is automatically hidden.

#5 | Original poster | Posted 2009-8-13 22:59
This bit, I guess :p

PCGH: I'd say about 50 percent more.

Bill Dally: M-hm.

I get the sense that he looks down a bit on the bandwidth increase GDDR5 brings, and when the 50 percent figure came up he probably just shrugged. I don't think it's that he's bad at fielding the question; maybe there were a few things he was itching to blurt out :p

#4 | westlee (this user has been deleted) | Posted 2009-8-13 22:10
Notice: the author has been banned or deleted, so the content is automatically hidden.

#3 | Posted 2009-8-13 19:55
Bill is still primarily a processor guy; Imagine is a classic among stream processors. Within parallel computing, what he's best at is computation for specific applications; Stanford's recent direction is PPS.

#2 | Posted 2009-8-13 12:11
Actually, from Bill's background you can pretty much guess what NV will look like five years from now.
