POPPUR爱换
Thread starter: goodayoo

I have a feeling NV may stumble badly with GT300, much like AMD did back in the 2900 XT days.

#21 Posted 2009-9-17 14:38
> OCL is harder to write than CUDA, and even though OCL is a common standard, right now it still only runs on NV GPUs. AMD? Their OCL component can only emulate on the CPU; who knows when a GPU version will show up. A has thrown all its people and money at hardware development, and its software support lags far, far behind.
> JoshuaChang, posted 2009-9-17 15:20


AMD is doing the right thing here. With performance still behind NV, fighting over OCL would be a waste of time. OCL can wait until they have time for it; after all, it's only just getting started.

#22 Posted 2009-9-17 14:48

> N's developer drivers have supported OCL since May, and the 190.89 leak from a couple of days ago fully supports CUDA/OCL/CS. AMD right now only supports running OCL in CPU-emulation mode. Whether it's OCL or CS, NV has always been first to follow up. And what has AMD done? Nothing but talk about hardware support. Hurry up with the software ...
> JoshuaChang, posted 2009-9-17 14:27

Well, AMD's version can't really be called emulation; it's genuine support. OCL itself targets both graphics cards and CPUs.

In fact, when AMD pushed 2.0, NV grumbled a bit that it was CPU-only with no GPU support.

AMD's plan, though, seems to be to add GPU support in the final 2.0 release once the 5870 is out.
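The point that OCL targets CPUs as well as GPUs is built into its execution model: you write one kernel as a function of a global work-item ID, and the runtime maps that index space onto whichever device you pick. A rough pure-Python sketch of the model (not real OpenCL; `saxpy_kernel` and `enqueue_nd_range` are invented names standing in for a kernel and a runtime dispatch):

```python
# Pure-Python sketch of the OpenCL execution model: one kernel,
# dispatched over an index space, on whichever "device" you choose.
# The function names are invented for illustration, not OpenCL API.

def saxpy_kernel(gid, a, x, y, out):
    """Kernel body: runs once per work-item, indexed by global id."""
    out[gid] = a * x[gid] + y[gid]

def enqueue_nd_range(kernel, global_size, *args):
    """Stand-in for the runtime: a CPU back end and a GPU back end
    would both just execute the kernel once per index in the range."""
    for gid in range(global_size):
        kernel(gid, *args)

n = 4
x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * n
enqueue_nd_range(saxpy_kernel, n, 2.0, x, y, out)
print(out)  # -> [12.0, 24.0, 36.0, 48.0]
```

Which device executes the range is a runtime choice, which is why a CPU-only OpenCL implementation is still real OpenCL rather than emulation.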

#23 Posted 2009-9-17 14:51

> AMD is doing the right thing here. With performance still behind NV, fighting over OCL would be a waste of time. OCL can wait until they have time for it; after all, it's only just getting started.
> 红发IXFXI, posted 2009-9-17 14:38


The problem is that AMD has always chased hardware. OGL has been around all these years and their support is still terrible. I spent several years on A's OGL back in the 9550 + X1300 era and it completely broke me. The spec sheet claims full OGL support, yet a mental mill carpaint effect came out as wireframe...

#24 Posted 2009-9-17 14:51

> I mostly agree with the earlier points, but point 4 is just too much. The only one Intel could be said to fear is IBM, right? CUDA can't out-muscle Intel; in R&D, marketing, and funding they're not in the same league. Intel's development tools are famously good. And anyway, AMD can't count on the GPU to keep it alive if the CPU side collapses ...
> alouha, posted 2009-9-17 13:43


On the fourth point: for Intel, I'd say "fear" is far too strong a word, but given Intel's style, I doubt it wants to leave NV any opening at all...

Compared with AMD, NV has grown very fast these past few years, and its ambitions look far from small, which makes it easy to see it as a threat O(∩_∩)O haha~

#25 Posted 2009-9-17 15:07

Read the whole thing quietly; quietly lending my support!

pikaqiuuuu (user deleted)
#26 Posted 2009-9-17 17:34

Note: the author has been banned or deleted; the content was automatically hidden.

#27 Posted 2009-9-17 17:39

Really looking forward to GPU-accelerated AI being promoted and put to use...
Saw that 4000-aircraft AI demo... impressive stuff...
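What makes a 4000-aircraft AI demo such a good GPU fit is that each agent's per-frame update reads only the previous frame's state, so all the updates within a step are independent and can run one-thread-per-agent. A toy sketch in plain Python (the steering rule is invented for illustration):

```python
# Toy crowd update: every agent reads last frame's state and writes
# its own new position, so the inner loop is embarrassingly parallel
# and a GPU could run one thread per agent. The rule is made up.

def step(positions):
    """Advance all agents one frame; each update touches only old state."""
    center = sum(positions) / len(positions)
    # Each agent drifts 10% of the way toward the flock's center.
    return [p + 0.1 * (center - p) for p in positions]

agents = [0.0, 10.0, 20.0, 30.0]
agents = step(agents)
print(agents)  # -> [1.5, 10.5, 19.5, 28.5]
```

The same structure scales to 4000 agents with richer rules; the key property is that nothing written in frame N is read until frame N+1.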

#28 Posted 2009-9-17 17:40

> NVIDIA: DX11 has no practical value! It's for fooling kids
>
> At a recent financial-analyst meeting, NVIDIA said that the next-generation DirectX 11 API will not boost graphics-card sales, and that GPGPU compute capability, NV's proprietary tools, and software built on those technologies ...
> tft1122, posted 2009-9-17 16:17



A repost with no substance behind it is pointless.

http://www.xbitlabs.com/news/vid ... Graphics_Cards.html

Nvidia: DirectX 11 Will Not Catalyze Sales of Graphics Cards.

DirectX 11 Is Not the Defining Reason to Invest into New Graphics Cards – Nvidia

[09/16/2009 02:03 PM]
by Anton Shilov
Nvidia Corp. said during a conference for financial analysts that the emergence of next-generation DirectX 11 application programming interface will not drive sales of graphics cards. The firm believes that general purpose computing on graphics processing units (GPGPU) as well as its proprietary tools and emergence of software taking advantage of these technologies will be a better driver for sales of graphics boards than new demanding video games and high-end cards.

DirectX 11 - Not Important
“DirectX 11 by itself is not going to be the defining reason to buy a new GPU. It will be one of the reasons. This is why Microsoft is working with the industry to allow more freedom and more creativity in how you build content, which is always good, and the new features in DirectX 11 are going to allow people to do that. But that no longer is the only reason, we believe, consumers would want to invest in a GPU,” said Michael Hara, vice president of investor relations at Nvidia, at the Deutsche Bank Securities Technology Conference on Wednesday.

Nvidia believes that special-purpose software that relies on GPGPU technologies will drive people to upgrade their graphics processing units (GPUs), not advanced visual effects in future video games or increased raw performance of DirectX 11-compliant graphics processors.

“Now, we know, people are doing a lot in the area of video, people are going to do more and more in the area of photography… I think that the things we are doing will allow the GPU to be a co-processor to the CPU and deliver better user experience, better battery life and make computers a little bit more optimized,” added Mr. Hara.

There are several problems for Nvidia, though. While ATI, the graphics business unit of Advanced Micro Devices, is about to launch its Radeon HD 5800-series graphics cards with full DirectX 11 support in the coming weeks, Nvidia has not yet disclosed any plans for its DX11 GPUs, which means that in the eyes of computer enthusiasts the company is no longer a technology leader.

Moreover, software that takes advantage of Nvidia’s proprietary CUDA GPGPU technology is, in many cases, incompatible with the open-standard OpenCL and DirectCompute 11 (DirectX 11 compute shader) environments, which are supported by the ATI Radeon HD 4000 and 5000 families of graphics processors in addition to Nvidia’s latest chips. Even though Nvidia has the advantage of a larger installed base of GeForce GPUs, and at least some software makers will choose to write CUDA software, the majority will settle on industry-standard DirectCompute and OpenCL, which puts all the interested parties – ATI/AMD, Intel, Nvidia, etc. – in the same boat, with no advantage from exclusive software. It is not completely clear why Nvidia is trying to downplay the importance of DirectX 11 and DirectCompute 11, the technologies that enable next-generation software.

Computing Performance More Important than Graphics Performance
Next-generation graphics processors will naturally not only outperform Nvidia’s and ATI’s current GeForce GTX 200- and Radeon HD 4000-series lines, but also offer support for future games, which is more than likely to prompt many gamers (who usually buy high-end graphics cards for $300 or more) to upgrade their graphics sub-systems. The new graphics cards will let gamers raise resolutions and enable more visual effects.

Nvidia believes that in the future computing performance will matter much more than graphics performance, which seems to make sense, as forthcoming video games will demand a lot of pure compute power to process not only visuals but also physics and artificial intelligence. Nevertheless, Nvidia seems to be putting a lot of hope in its proprietary technologies, such as CUDA, Stereo 3D Vision, PhysX and others. This is understandable, as the aforementioned technologies allow Nvidia to differentiate itself. However, like all proprietary standards (3dfx Glide is one example), they may not stay on the leading edge in the longer term.

“The graphics industry, I think, is at the point the microprocessor industry was at several years ago, when AMD made the public confession that frequency does not matter anymore and it is more about performance per watt. I think we are at the same crossroads with the graphics world: framerate and resolution are nice, but today they are very high, and going from 120fps to 125fps is not going to fundamentally change the end-user experience. But I think the things that we are doing with Stereo 3D Vision, PhysX, about making the games more immersive, more playable, go beyond framerates and resolutions. Nvidia will show with its next-generation GPUs that the compute side is now becoming more important than the graphics side,” concluded Mr. Hara.


Nowhere in the original article does NVIDIA argue that DX11 is useless; it only says DX11 is no longer the sole reason to buy a new card, just one reason among several. AMD or any other graphics-chip vendor would say much the same; otherwise AMD wouldn't bother with Eyefinity and other technologies that have nothing to do with DX11.

In fact, the "computing" NVIDIA is talking about here is itself a new DX11 feature, so I don't see how this statement amounts to a "DX11 is useless" argument.

westlee (user deleted)
#29 Posted 2009-9-17 18:20

Note: the author has been banned or deleted; the content was automatically hidden.

#30 Posted 2009-9-17 20:11

> A repost with no substance behind it is pointless.
>
> http://www.xbitlabs.com/news/video/display/20090916140327_Nvidia_DirectX_11_Will_Not_Catalyze_Sales_of_Graphics_Cards.html
>
> Nvidia: DirectX 11 Will Not Catalyze S ...
> Edison, posted 2009-9-17 17:40


Come to think of it, I've never used DX as the basis for buying a graphics card; OGL is what matters.

#31 Posted 2009-9-17 20:18

> By the time AMD feels like developing for it, it'll discover that the so-called common standards were all written around NV's cards anyway.
> westlee, posted 2009-9-17 19:20


With gaming still behind in both performance and drivers, they'd better sort out their main business first!

#32 Posted 2009-9-17 20:19

Heh, just watching how this plays out.

#33 Posted 2009-9-17 20:52

> Here are a few of my inferences:
>
> 1: The "DX11 is useless" claim. You rarely hear this sort of thing from NV; declaring a new technology useless? Wouldn't that offend Microsoft? Most likely GT300 won't ship until Q1 next year and they just want to clear out GT200 inventory.
>
> 2: A staggering transistor count and an eye-popping process. Judging from GT300's specs ...
> goodayoo, posted 2009-9-17 12:51


On the first point: what's the original source?

(avatar blocked)
#34 Posted 2009-9-18 00:38

Note: the author has been banned or deleted; the content was automatically hidden.

#35 Posted 2009-9-18 00:40

Only when GPUs can process serial work as fast as CPUs will Intel start taking the GPU seriously.

Until then it's all contempt + condescension + side-eye.
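The "serial work" point comes down to loop-carried dependencies: when step i needs the result of step i-1, a thousand GPU threads don't help, because there is only one chain to walk. A minimal contrast, sketched in plain Python:

```python
# Two loops over the same data. The first is elementwise: each
# iteration is independent, so a GPU could run them all at once.
# The second carries a dependency: iteration i needs acc from
# iteration i-1, so it is inherently one-at-a-time no matter how
# many cores you have.

data = [1, 2, 3, 4, 5]

# Data-parallel: iteration order doesn't matter.
squares = [v * v for v in data]

# Serial: each value folds into the state left by the previous step.
acc = 0
for v in data:
    acc = acc * 2 + v   # depends on the previous acc

print(squares)  # -> [1, 4, 9, 16, 25]
print(acc)      # -> 57
```

CPUs win the second loop on raw single-thread speed; GPUs win the first on sheer width. That split is exactly why Intel could afford to ignore GPUs for serial workloads.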

#36 Posted 2009-9-18 02:01

The "useless" claim came from a few forum geniuses; NV itself never said it~

For ordinary gamers, a card that runs fast with the effects turned up is a good card. Granted, DX11 games won't be everywhere any time soon, but the talking point is already there~~~~~~ While GT300 is stuck in a difficult birth, the rebadging strategy remains quite effective, and that rests on NV's software advantage.

#37 Posted 2009-9-18 10:41

Speculating about the unknown is just wishful thinking. Guess right and the wishful thinking succeeded; guess wrong and it was wishful thinking all along.

#38 Posted 2009-9-18 11:17

> Only when GPUs can process serial work as fast as CPUs will Intel start taking the GPU seriously.
>
> Until then it's all contempt + condescension + side-eye.
> yamhill, posted 2009-9-18 00:40



With today's silicon technology, no single compute architecture can possibly suit every kind of workload. GPU and CPU each have their strengths; neither can replace the other.
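The "each has its strengths" point is essentially Amdahl's law: the serial fraction of a program caps the overall speedup no matter how fast the parallel part runs, which is why a CPU for the serial chain plus a GPU for the parallel bulk beats either alone. A quick worked sketch:

```python
# Amdahl's law: with serial fraction s and a speedup of p on the
# parallel part, overall speedup = 1 / (s + (1 - s) / p).

def amdahl(serial_fraction, parallel_speedup):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / parallel_speedup)

# Even a 100x GPU speedup on the parallel 90% of a program yields
# less than 10x overall: the serial 10% dominates.
print(round(amdahl(0.10, 100), 2))   # -> 9.17
# Shrink the serial part to 1% and the same GPU gives ~50x.
print(round(amdahl(0.01, 100), 2))   # -> 50.25
```

The arithmetic cuts both ways: it shows why a GPU can't replace the CPU on serial chains, and why the CPU alone leaves most of the parallel work on the table.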

Powered by Discuz! X3.4 © 2001-2017 POPPUR.