
Topic: Possible leaked picture of Radeon 7990? (Read 3912 times)

donator
Activity: 1218
Merit: 1079
Gerald Davis
April 16, 2012, 12:26:58 PM
#15
But even then, that is still using PCIe lanes. I wonder if a custom protocol on a direct, short link would be even faster (think HyperTransport or its equivalents).

PCIe 3.0 with 16 lanes is 16 GB/s in each direction, non-blocking.  What scenarios would require >32 GB/s?  For those 0.0000000000000000000000000000001% of the time, does the cost of a proprietary solution outweigh the simplicity and economies of scale of using off-the-shelf components?
Good points. For sure, it wouldn't apply in the slightest to bitcoin mining. But all kinds of odd stuff is needed for other GPU compute applications. What if someone decided to fit one SHA256 core on each GPU and forced them to communicate with each other in order to perform the double SHA256 that Bitcoin needs? That would use a lot of bandwidth, although I don't know whether it would use the GPU resources better/faster or not.

It all comes down to cost vs benefit.

The bridge chip is actually "not" required.

PCIe spec differentiates ports from lanes.  It is "possible" to have a PCIe controller route lanes to two ports on the same expansion slot.  Support is essentially non-existent, though.  Most (all?) motherboards assume 1 slot = 1 port.  If a dual GPU used that feature, it either wouldn't work at all or only 1 GPU would be usable on motherboards that don't support it.

If at some point in the future most motherboards supported routing 2+ ports to the same physical slot, then there would be no need for a bridge chip.  Each GPU would simply connect via 8 lanes.  One irony is that such GPUs wouldn't be usable in PCIe x1 slots (i.e. extenders).
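For a rough sense of what each wiring option gives you, here is a back-of-the-envelope sketch. It assumes PCIe 3.0 at 8 GT/s per lane with 128b/130b encoding and ignores packet/protocol overhead, so real throughput is somewhat lower:

```python
def gbytes_per_sec(lanes, gt_per_sec=8.0, encoding=128 / 130):
    """Usable bandwidth per direction, in GB/s, for a PCIe 3.0 link."""
    return lanes * gt_per_sec * encoding / 8  # divide by 8 bits per byte

# (a) bridge chip: both GPUs share one x16 link to the host
shared_x16 = gbytes_per_sec(16)
# (b) bifurcated slot: each GPU gets its own dedicated x8 link
two_x8 = 2 * gbytes_per_sec(8)
# (c) x1 riser/extender: the whole card sees a single lane
riser_x1 = gbytes_per_sec(1)

print(f"shared x16: {shared_x16:.2f} GB/s aggregate")
print(f"two x8:     {two_x8:.2f} GB/s aggregate")
print(f"x1 riser:   {riser_x1:.2f} GB/s")
```

The aggregate host bandwidth of two x8 links equals one shared x16, which is why bifurcation loses nothing over a bridge chip; an x1 extender is a different regime entirely, under 1 GB/s per direction (though for mining that hardly matters).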
full member
Activity: 126
Merit: 100
April 16, 2012, 12:26:51 PM
#14
It is most definitely a fake image. As far as I know, the 7990 is supposed to have a PLX PEX 8747 (Gen3, 48 lanes, 5 ports) PCI-E switch on the board, which would probably be on the front side, no?

(48 lanes... just imagine 3 GPUs...)
legendary
Activity: 1274
Merit: 1004
April 16, 2012, 12:22:11 PM
#13
If you actually look at it zoomed in, it is pretty obviously faked. For instance, look where the PCIe pairs enter the GPU on the left. That seems pretty normal. Now look at the bottom of the GPU on the right: the same traces enter the GPU as on the left, despite the fact that there are no PCIe lanes coming in; they just abruptly end at those GDDR5 chips.
Also, on the top left side of the left GPU you can see the fanout of traces going to the two RAM modules. On the right GPU the fanout is exactly the same, but it just ends at a wall of ceramic caps.

Having the GPU do the switching isn't a terrible idea, but it would mean a complete redesign of the Tahiti die unless a sideport is built in. I haven't heard of them bringing that back, so I assume that if they wanted to do something like this they'd have to redesign Tahiti, and that's just not going to happen for a low-volume part like this.
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
April 16, 2012, 12:20:00 PM
#12
But even then, that is still using PCIe lanes. I wonder if a custom protocol on a direct, short link would be even faster (think HyperTransport or its equivalents).

PCIe 3.0 with 16 lanes is 16 GB/s in each direction, non-blocking.  What scenarios would require >32 GB/s?  For those 0.0000000000000000000000000000001% of the time, does the cost of a proprietary solution outweigh the simplicity and economies of scale of using off-the-shelf components?
Good points. For sure, it wouldn't apply in the slightest to bitcoin mining. But all kinds of odd stuff is needed for other GPU compute applications. What if someone decided to fit one SHA256 core on each GPU and forced them to communicate with each other in order to perform the double SHA256 that Bitcoin needs? That would use a lot of bandwidth, although I don't know whether it would use the GPU resources better/faster or not.
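The "double SHA256" in question is just SHA-256 applied twice, which is what Bitcoin does when hashing a block header. A minimal sketch (the all-zero 80-byte header below is a dummy placeholder, not real block data):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin-style double hash: SHA-256 of a SHA-256 digest."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Real mining repeatedly hashes an 80-byte block header while
# iterating the nonce field; this dummy header just shows the shape.
header = bytes(80)
print(double_sha256(header).hex())
```

Splitting the two SHA-256 passes across two GPUs would mean shipping every 32-byte intermediate digest between them, which is where the bandwidth question above comes from.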
donator
Activity: 1218
Merit: 1079
Gerald Davis
April 16, 2012, 12:17:13 PM
#11
But even then, that is still using PCIe lanes. I wonder if a custom protocol on a direct, short link would be even faster (think HyperTransport or its equivalents).

PCIe 3.0 with 16 lanes is 16 GB/s in each direction, non-blocking.  What scenarios would require >32 GB/s?  For those 0.0000000000000000000000000000001% of the time, does the cost of a proprietary solution outweigh the simplicity and economies of scale of using off-the-shelf components?
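Those figures can be sanity-checked from the PCIe 3.0 spec values of 8 GT/s per lane with 128b/130b line coding (raw link rate; packet and protocol overhead ignored):

```python
GT_PER_LANE = 8.0    # gigatransfers/s per lane, PCIe 3.0
ENCODING = 128 / 130  # 128b/130b line-coding efficiency
LANES = 16

per_direction = LANES * GT_PER_LANE * ENCODING / 8  # GB/s, one direction
bidirectional = 2 * per_direction                   # GB/s, both directions

print(f"x16 Gen3: ~{per_direction:.2f} GB/s per direction, "
      f"~{bidirectional:.1f} GB/s bidirectional")
```

That works out to roughly 15.75 GB/s per direction, about 31.5 GB/s total, which is where the ~16 GB/s and ~32 GB/s figures come from.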
donator
Activity: 1218
Merit: 1079
Gerald Davis
April 16, 2012, 12:14:38 PM
#10
I always wondered why they did that clumsy bodge with the PCIe switch on dual-GPU cards. It seems like it would make more sense for the GPUs to communicate with each other directly, and perhaps even show up as a single device. I suppose the switch is easier to design in and doesn't need special GPU support, but it just seems natural to delete it.

It simplifies drivers and OS interaction.

It is two complete, independent, and identical GPUs (from the OS's point of view).

If you had no PCIe switch, then either:
a) you need some proprietary logic to route I/O to the proper GPU, or

b) one GPU is the "master" and the other the "slave": only the master is connected to the PCIe port and relays I/O to the second GPU.

Neither is really a good solution, and I don't see the switch as clumsy. You have two independent components on a single board which need access to the CPU via a single PCIe port: 1 port -> switch -> 2 ports, each with a GPU connected.

I don't think it will ever be "deleted".  Remember, 7990s are simply two 7970s.  There would be no reason for the 7970 to interface with the PCIe adapter/controller via a proprietary solution.
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
April 16, 2012, 12:13:14 PM
#9
There are boards that communicate directly from PCI-E to GPU without a CFX bridge.  MSI (as well as Asus) made/designed a board or two a few years back using two mobile Radeons, each using 8 of the 16 PCI-E lanes through older MXM slots. Asus actually sold one with three MXM slots for triple CFX on one PCI-E board. Nevertheless, mobile GPUs may not be the best choice for many obvious reasons, cost being perhaps the major one.

MSI Germanium http://adler-pc.com/news/msi.mxm03b.jpg
But even then, that is still using PCIe lanes. I wonder if a custom protocol on a direct, short link would be even faster (think HyperTransport or its equivalents).
full member
Activity: 126
Merit: 100
April 16, 2012, 12:08:47 PM
#8
There are boards that communicate directly from PCI-E to GPU without a CFX bridge.  MSI (as well as Asus) made/designed a board or two a few years back using two mobile Radeons, each using 8 of the 16 PCI-E lanes through older MXM slots. Asus actually sold one (Trinity) with three MXM slots for triple CFX on one PCI-E board. Nevertheless, mobile GPUs may not be the best choice for many obvious reasons, cost being perhaps the major one.

MSI Germanium http://adler-pc.com/news/msi.mxm03b.jpg

rjk
sr. member
Activity: 448
Merit: 250
1ngldh
April 16, 2012, 11:43:59 AM
#7
That's the first thing I noticed too. I suppose a PCIe switch could be on the backside, but it really looks like all the diff pairs go to the first GPU. Either it's a really good fake, or one of the GPUs handles the PCIe switching.
I always wondered why they did that clumsy bodge with the PCIe switch on dual-GPU cards. It seems like it would make more sense for the GPUs to communicate with each other directly, and perhaps even show up as a single device. I suppose the switch is easier to design in and doesn't need special GPU support, but it just seems natural to delete it.
legendary
Activity: 1274
Merit: 1000
April 16, 2012, 11:39:36 AM
#6
Doesn't AMD always have one GPU upside down?

And is it just me, or is the pink capacitor below the right GPU a bit flat on top? Looks a little cut off via Photoshop.

legendary
Activity: 1274
Merit: 1004
April 15, 2012, 10:54:26 PM
#5
One would think the memory chips would be fewer and denser. The VRM appears somewhat replicated, there's no CFX controller, and there are 2 SLI connectors... hmmm

It's definitely fake in my mind. Look at the screw hole between the memory chips. For one, having it that close to the BGAs would be really poor design anyway, but in this case the hole is so close it actually eats into part of the chip.
full member
Activity: 126
Merit: 100
April 15, 2012, 10:28:09 PM
#4
One would think the memory chips would be fewer and denser. The VRM appears somewhat replicated, there's no CFX controller, and there are 2 SLI connectors... hmmm
hero member
Activity: 504
Merit: 500
April 15, 2012, 10:27:42 PM
#3
posted "01.04.2012, 10:54"

April fools?
legendary
Activity: 1274
Merit: 1004
April 15, 2012, 10:25:34 PM
#2
That's the first thing I noticed too. I suppose a PCIe switch could be on the backside, but it really looks like all the diff pairs go to the first GPU. Either it's a really good fake, or one of the GPUs handles the PCIe switching.
full member
Activity: 126
Merit: 100
April 15, 2012, 10:02:22 PM
#1
This is eye-catching: (Would there not be a PCI-E controller?)

http://www.tweakpc.de/forum/ati-grafikkarten/86500-erstes-bild-radeon-hd-7990-aufgetaucht.html
