POPPUR爱换

Thread author: asdfjkl

Since the rumor that the 390X will use 4 GB of VRAM, my understanding of VRAM size has shifted a bit...

21#
Posted on 2015-3-23 13:11
Quoting asdfjkl, posted 2015-3-22 22:43:
Your idea is decent, but implementing it requires making several chips:
a GPU without an MC, or with one, as before (chip 1);
a dedicated ...

My idea is simply to make dual-GPU cards easier to build; designing a new "GPU without an MC" is unnecessary, so a single-GPU card would never need "chip 2" at all.

This scheme is limited to dual-GPU single cards; physically, putting "several GPUs" on one PCB seems impractical, no? As for the latency problem, I believe you could compensate with 12 GHz GDDR5~

Why would the ordinary GPU need an extra set of interfaces? The wiring that used to come out of the MC now runs to chip 2, but the GPU and chip 2 are separate dies, so their areas are counted separately too; much like the Tesla-era G200 core being split from the NVIO2 chip, the impact on yield wouldn't be that severe.

22#
Posted on 2015-3-23 13:15
Quoting iamw2d, posted 2015-3-22 23:07:
HMC can be chained like this

Can the same memory die be read from and written to by more than one GPU at the same time?

23#
Posted on 2015-3-23 17:32
Quoting Xenomorph, posted 2015-3-23 13:15:
Can the same memory die be read from and written to by more than one GPU at the same time?

Multiple HMC devices may be chained together to increase the total memory capacity available to a host. A network of up to eight HMC devices and 4 host source links is supported. Each HMC in the network is identified through the value in its CUB field, located within the request packet header. The host processor must load routing configuration information into each HMC. This routing information enables each HMC to use the CUB field to route request packets to their destination. Each HMC link in the cube network is configured as either a host link or a pass-through link, depending upon its position within the topology. See Figure 5 and Figure 6 for illustration. A host link uses its link slave to receive request packets and its link master to transmit response packets. After receiving a request packet, the host link will either propagate the packet to its own internal vault destination (if the value in the CUB field matches its programmed cube ID) or forward it towards its destination in another HMC via a link configured as a pass-through link. In the case of a malformed request packet whereby the CUB field of the packet does not indicate an existing CUBE ID number in the chain, the request will not be executed, and a response will be returned (if not posted) indicating an error. A pass-through link uses its link master to transmit the request packet towards its destination cube, and its link slave to receive response packets destined for the host processor.
An HMC link connected directly to the host processor must be configured as a host link in source mode. The link slave of the host link in source mode has the responsibility to generate and insert a unique value into the source link identifier (SLID) field within the tail of each request packet. The unique SLID value is used to identify the source link for response routing. The SLID value does not serve any function within the request packet other than to traverse the cube network to its destination vault where it is then inserted into the header of the corresponding response packet. The host processor must load routing configuration information into each HMC. This routing information enables each HMC to use the SLID value to route response packets to their destination. Only a host link in source mode will generate a SLID for each request packet. On the opposite side of a pass-through link is a host link that is NOT in source mode. This host link operates with the same characteristics as the host link in source mode except that it does not generate and insert a new value into the SLID field within a request packet. All link slaves in pass-through mode use the SLID value generated by the host link in source mode for response routing purposes only. The SLID fields within the request packet tail and the response packet header are considered "Don’t Care" by the host processor. See Figure 5 for supported multi-cube topologies. Contact Micron for guidance regarding feasibility of all other topologies. In the following figures, the link arrows show the direction of requests from the host(s). Responses will travel in the opposite direction on the same link.

I can't tell from this whether the same block can actually be read and written at the same time; my guess is that it can.
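The CUB-based routing the excerpt describes can be sketched in toy form. This is a minimal illustration of the routing rule only (handle a request locally if its CUB field matches the programmed cube ID, else forward it over a pass-through link per a host-loaded routing table); the class and method names are invented for the example and are not Micron's API, and SLID-based response routing is omitted.

```python
# Toy model of chained-HMC request routing (illustrative, not Micron's design).
class HMC:
    def __init__(self, cube_id, routing):
        self.cube_id = cube_id   # programmed cube ID, matched against CUB
        self.routing = routing   # host-loaded table: destination CUB -> next HMC
        self.vaults = {}         # local vault storage, keyed by address

    def request(self, cub, addr, data=None):
        if cub == self.cube_id:              # destination vault is local
            if data is not None:
                self.vaults[addr] = data     # WRITE request
                return "ok"
            return self.vaults.get(addr)     # READ request -> response
        nxt = self.routing.get(cub)
        if nxt is None:                      # malformed CUB: error response
            return "error: no such cube ID"
        return nxt.request(cub, addr, data)  # forward via pass-through link

# Two-cube chain reachable from two hosts (the dual-GPU case in question):
hmc1 = HMC(1, {})
hmc0 = HMC(0, {1: hmc1})
hmc1.routing[0] = hmc0

hmc0.request(1, 0x100, data=42)  # GPU A writes into cube 1 through cube 0
print(hmc1.request(1, 0x100))    # GPU B reads the same location directly -> 42
```

The spec excerpt does confirm that up to 4 host links can reach the same chain, which is what lets both GPUs address the same vault; what it leaves open is how concurrent accesses are ordered.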



24#
Posted on 2015-3-24 21:50
Quoting iamw2d, posted 2015-3-23 17:32:
Multiple HMC devices may be chained together to increase the total memory capacity available to a ...

Just read it; it looks like it can be read and written by 2 MCs, but it doesn't seem to say whether that can happen "simultaneously"; if it can't, then at best it serves as a capacity-adaptive VRAM pool~

25#
Posted on 2015-3-25 17:58
Quoting Xenomorph, posted 2015-3-24 21:50:
Just read it; it looks like it can be read and written by 2 MCs, but it doesn't seem to say whether that can happen "simultaneously"; if it can't, then at best it serves as a capacity-adaptive VRAM ...

For an external MC to be read and written by two GPUs at once, you would need to add an FSB, similar to the multi-core arrangement of the Core 2 era. The problem is that if this FSB's bandwidth has to match the memory's, the I/O trace count and power draw would start chasing GDDR5's.
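The bandwidth objection can be checked with back-of-the-envelope numbers. The figures below (512-bit bus, 5 Gb/s GDDR5 pins, 8 Gb/s serial lanes) are illustrative assumptions of roughly R9 290X-class hardware, not values from the thread.

```python
# Rough check: a shared external MC must feed each GPU over a link carrying
# the full memory bandwidth, so the link's lane count ends up in the same
# ballpark as the GDDR5 bus itself. All numbers are illustrative assumptions.

gddr5_bus_width = 512                 # data pins on the GDDR5 bus
gddr5_rate_gbps = 5.0                 # Gb/s per pin
mem_bw_gbs = gddr5_bus_width * gddr5_rate_gbps / 8
print(f"memory bandwidth: {mem_bw_gbs} GB/s")              # 320.0 GB/s

fsb_lane_rate_gbps = 8.0              # assumed Gb/s per serial lane
lanes_per_gpu = mem_bw_gbs * 8 / fsb_lane_rate_gbps
print(f"lanes needed per GPU link: {lanes_per_gpu:.0f}")   # 320
```

With differential signalling that is two pins per lane, i.e. over 600 extra pins per GPU just for the link, which is exactly the "I/O count chasing GDDR5" problem.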

26#
Posted on 2015-3-26 21:58
Quoting fengpc, posted 2015-3-25 17:58:
For an external MC to be read and written by two GPUs at once, you would need to add an FSB, similar to the multi-core arrangement of the Core 2 era. The problem is that if this FSB's bandwidth ...

So that's how it is...

Then would it work to have GDDR5 managed uniformly by an external MC?
