
Topic: Preliminary GTX 680 Test Results - Tom's Hardware Forwarded to Xtreme Systems (Read 3331 times)

sr. member
Activity: 402
Merit: 250
One of the reviews showed ~80% performance for an AES encrypt/decrypt cycle with an 8k×8k image, while at maximum load the GTX 680 was using 1/6th less energy (system total).
*IF*, and that's a pretty huge if, this translates directly to mining performance, the question remains how well the GTX 680 undervolts and downclocks. If it does that 15% better, then the GTX 680 *might* be the go-to card for new high-end systems.

It seems the 7970 will still remain 15-20% better at efficiency. We can only wait and see for the first mining results - and then someone will probably optimize the mining code for the card.
If moving from OpenCL to CUDA code truly gains 10-20%, then the GTX 680 might actually be better - assuming the AES test was using OpenCL.

TFLOPS count reflects those figures.
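The review numbers quoted above can be turned into a back-of-envelope perf/watt figure (a quick sketch; the ~80% and 1/6th figures are the review's, the baseline is assumed to be the 7970, and system-total power muddies the GPU-only comparison):

```python
# Rough perf/watt from the AES review numbers quoted above.
# Assumptions (from this thread, not measured): GTX 680 reaches ~80% of the
# 7970's throughput while the system draws ~1/6th less power at full load.

perf_ratio = 0.80          # GTX 680 throughput relative to 7970
power_ratio = 1 - 1 / 6    # GTX 680 system power relative to 7970 (~0.833)

perf_per_watt = perf_ratio / power_ratio
print(f"GTX 680 relative perf/watt: {perf_per_watt:.2f}")  # 0.96
```

On these numbers alone the 680 lands at ~96% of the 7970's efficiency; the larger 15-20% gap mentioned above would come from the GPU-only power difference being bigger than the system-total one suggests.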

sr. member
Activity: 310
Merit: 250
I knew there was no 4608-shader card from EITHER vendor... frankly that was absurd.

No hot-clocks on the 680 was surprising... but this is a more power-efficient card than the 580, even with 3x the shaders.

Two things I want to share with you guys...

A) The GTX 680 is a better card AT GAMING than the 7970

B) Because of A), the 7970 is overpriced compared to the competition, so we'll see a price drop shortly.

Still, I'm glad I bought a 7970 (even though it is less of a gaming card), and I'm currently aiming for five 7990s as my mining card of choice when they get released.

Good times ahead!!!!

member
Activity: 92
Merit: 10
What people are saying in the reviews is that you can't go over a specific power draw, which is about +30% (up to ~250 W). So the card can't go past 1200-1300 MHz depending on load. GPU Boost looks like a tricky shuffle between temperature, voltage, and core clock. Is GPU Boost impossible to disable?... I hope not. The memory clock isn't changed.

Hopefully MSI/ASUS will release their custom cooler cards with 6+8 pin : )
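The ~250 W cap mentioned above lines up with a +30% power target on the GTX 680's rated 195 W TDP (a quick sketch; the exact slider range may differ per vendor BIOS):

```python
# Sanity-check the ~250 W power ceiling quoted in the reviews against the
# GTX 680's rated board TDP with the reported +30% power-target slider.

tdp_watts = 195      # GTX 680 rated TDP
power_target = 1.30  # +30% slider, as reported

max_draw = tdp_watts * power_target
print(f"max board draw: {max_draw:.1f} W")  # 253.5 W, i.e. the ~250 W cap
```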

Vbs
hero member
Activity: 504
Merit: 500
Guys, DirectCompute and OpenCL benchmarks are really pointless for nVidia cards atm.

Both are direct competitors to CUDA, so nVidia does what it can to "ignore" them. I've seen CUDA be 20-30% faster than the "same" code in OpenCL, because they don't care as much about optimizing their OpenCL compiler as they do about their precious CUDA compiler or their new OpenACC initiative.
hero member
Activity: 518
Merit: 500
Hello, this is GPGPU, not gaming. You have a common algorithm, not some binary blob that vendors create application profiles for, with all kinds of tricks to improve performance. No miracles and no marketing gibberish, just simple maths. You have 1536 shaders clocked at 1008 MHz versus 2048 shaders clocked at 925 MHz. With GCN, both architectures are now scalar, and comparisons are even easier because bitcoin employs a simple algorithm that is extremely ALU-bound, not memory-intensive, and does not involve lots of branching. In the best case, where Nvidia implemented bitwise rotations and bitselect, the difference performance-wise would be ~22% in favor of the 7970 - and that would also likely require a rewrite of the Nvidia miners. As far as mining is concerned, and as long as the TDP and prices announced so far are correct, there is no way the 680 becomes a better alternative to the 7970. Not even close.
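The shader-count × clock arithmetic above can be checked directly (a quick sketch using only the figures quoted in the post):

```python
# Raw ALU throughput from the shader counts and clocks quoted above:
# 1536 shaders @ 1008 MHz (GTX 680) vs 2048 shaders @ 925 MHz (HD 7970).

gtx680 = 1536 * 1008   # shader-MHz
hd7970 = 2048 * 925

advantage = hd7970 / gtx680 - 1
print(f"7970 raw ALU advantage: {advantage:.1%}")  # 22.4%
```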


What about the card that is supposed to have 2304 shaders ?

The GTX 680 is not the flagship, I think. If it is, then DAMN, Nvidia screwed me again :(

I really was hoping to go AMD-free this time, but it seems like a no-go if the GTX 680 is all they have to show for Kepler.

EDIT: http://www.legitreviews.com/news/12673/ Looks like there won't be an Nvidia dual-GPU monster with 4608 shaders :(

Also worth noting are the TFLOPS figures stated for single precision ...

So it seems like a hard choice between the GTX 685 and the 7990, because the Nvidia dual-GPU will only have 3072 shaders ?

Maybe this year AMD = best dual GPU with 7990 ( 4096 shaders )
                      Nvidia = best single GPU with GTX 685 ( 3072 shaders )

All this for mining purposes. Am I mad or what? :D
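For what it's worth, the dual-GPU shader counts above give the same gap as the single-GPU comparison, assuming (hypothetically) that the single-GPU clocks carry over:

```python
# Shader-MHz for the rumored dual-GPU parts, using the counts above and
# assuming single-GPU clocks carry over: 925 MHz for the 4096-shader 7990,
# 1008 MHz for a 3072-shader Nvidia dual card.

hd7990 = 4096 * 925
nv_dual = 3072 * 1008

advantage = hd7990 / nv_dual - 1
print(f"7990 raw ALU advantage: {advantage:.1%}")  # 22.4%, same gap as single-GPU
```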
hero member
Activity: 533
Merit: 500

  I wasn't sure of the exact math, but thanks for explaining things.

  I never said this card would magically make Nvidia the go-to card / architecture for mining.  I just think this may finally be a card that games great and happens to mine "ok".  Obviously the choice is still AMD for any dedicated mining ops.
sr. member
Activity: 256
Merit: 250
Hello, this is GPGPU, not gaming. You have a common algorithm, not some binary blob that vendors create application profiles for, with all kinds of tricks to improve performance. No miracles and no marketing gibberish, just simple maths. You have 1536 shaders clocked at 1008 MHz versus 2048 shaders clocked at 925 MHz. With GCN, both architectures are now scalar, and comparisons are even easier because bitcoin employs a simple algorithm that is extremely ALU-bound, not memory-intensive, and does not involve lots of branching. In the best case, where Nvidia implemented bitwise rotations and bitselect, the difference performance-wise would be ~22% in favor of the 7970 - and that would also likely require a rewrite of the Nvidia miners. As far as mining is concerned, and as long as the TDP and prices announced so far are correct, there is no way the 680 becomes a better alternative to the 7970. Not even close.
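The "bitwise rotations and bitselect" mentioned above are the SHA-256 building blocks that mining kernels lean on; a minimal Python sketch of both (on GCN each maps to a single instruction; the function names here are just illustrative):

```python
MASK32 = 0xFFFFFFFF

def rotr(x, n):
    """32-bit rotate right -- a single hardware instruction on GCN."""
    return ((x >> n) | (x << (32 - n))) & MASK32

def bitselect(a, b, c):
    """For each bit position: take the bit from b where c is 1, else from a."""
    return (a & ~c) | (b & c)

def ch(e, f, g):
    """SHA-256's Ch(e, f, g) = (e & f) ^ (~e & g), as one bitselect."""
    return bitselect(g, f, e)

print(hex(rotr(0x80000000, 1)))  # 0x40000000
```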
hero member
Activity: 518
Merit: 500
Quote from: The-Real-Link
(I will be picking one up either way).  At least for gaming and power, they do very well!

  It would seem that until we mine, it's still up in the air.  The 580 gets roughly 140-150 MH/sec or so according to the hardware charts and my own use of it, but it doesn't show up on these compute hashing charts at all.  If a 7970 is in the 600 MH/sec range and we simply divide, it would actually appear that the 680 would mine even worse than the 580.  The architecture is different, however, so I can't see it doing worse (many shaders at a high clock).  If anyone else wants to shine some light here, it'd be welcome ;)

  Thanks to Olivon for posting the data.

Please use the latest CUDA miner and do tell us what performance you get.

I have been waiting every day for something like this for a long time now ...

Thanks !

Any good programmers want to optimize the code for the Kepler arch?



Sure.  I'll post my results once I can get a card.  Will of course do stock and potential OC once I learn what is best for the new card in terms of temps and safety.

Can't wait for them !

Maybe someone talented can see if Kepler can pwn some AMD ass?

Really sick of AMD and their messed-up drivers >:(
hero member
Activity: 533
Merit: 500
Quote from: The-Real-Link
(I will be picking one up either way).  At least for gaming and power, they do very well!

  It would seem that until we mine, it's still up in the air.  The 580 gets roughly 140-150 MH/sec or so according to the hardware charts and my own use of it, but it doesn't show up on these compute hashing charts at all.  If a 7970 is in the 600 MH/sec range and we simply divide, it would actually appear that the 680 would mine even worse than the 580.  The architecture is different, however, so I can't see it doing worse (many shaders at a high clock).  If anyone else wants to shine some light here, it'd be welcome ;)

  Thanks to Olivon for posting the data.

Please use the latest CUDA miner and do tell us what performance you get.

I have been waiting every day for something like this for a long time now ...

Thanks !

Any good programmers want to optimize the code for the Kepler arch?



Sure.  I'll post my results once I can get a card.  Will of course do stock and potential OC once I learn what is best for the new card in terms of temps and safety.
hero member
Activity: 518
Merit: 500
Quote from: The-Real-Link
(I will be picking one up either way).  At least for gaming and power, they do very well!

  It would seem that until we mine, it's still up in the air.  The 580 gets roughly 140-150 MH/sec or so according to the hardware charts and my own use of it, but it doesn't show up on these compute hashing charts at all.  If a 7970 is in the 600 MH/sec range and we simply divide, it would actually appear that the 680 would mine even worse than the 580.  The architecture is different, however, so I can't see it doing worse (many shaders at a high clock).  If anyone else wants to shine some light here, it'd be welcome ;)

  Thanks to Olivon for posting the data.

Please use the latest CUDA miner and do tell us what performance you get.

I have been waiting every day for something like this for a long time now ...

Thanks !

Any good programmers want to optimize the code for the Kepler arch?

hero member
Activity: 533
Merit: 500
  Hi everyone,

  Caught this in a couple different places but didn't see a post yet.

  Since I'm sure Nvidia's NDA forced Tom's to take their preliminary test reviews down, you can view them at Xtreme systems as someone loaded them there.

  http://www.xtremesystems.org/forums/showthread.php?277763-%93Kepler%94-Nvidia-GeForce-GTX-780&p=5071447&viewfull=1#post5071447

  Of particular note are the Compute graphs:

  [compute benchmark charts - see the Xtreme Systems link above]
  It would appear that while the GTX 580 doesn't make the chart at all, the 680 does.  It is overshadowed by the 79xx cards, however, by at least 2x and up to 4-5x depending on the encryption test.

  Is there any way to infer how well it'll mine from these charts, or do we just have to wait until they hit retail? (I will be picking one up either way.)  At least for gaming and power, they do very well!

  It would seem that until we mine, it's still up in the air.  The 580 gets roughly 140-150 MH/sec or so according to the hardware charts and my own use of it, but it doesn't show up on these compute hashing charts at all.  If a 7970 is in the 600 MH/sec range and we simply divide, it would actually appear that the 680 would mine even worse than the 580.  The architecture is different, however, so I can't see it doing worse (many shaders at a high clock).  If anyone else wants to shine some light here, it'd be welcome ;)

  Thanks to Olivon for posting the data.
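The "simply divide" estimate above can be sketched numerically (ballpark only - the chart ratios are for AES, not SHA-256, and the ~600 MH/sec 7970 figure is the thread's, not a measurement):

```python
# Naive scaling of the compute-chart gap onto mining hashrate: if the 7970
# mines ~600 MH/s and the charts put the 680 at 1/2 to 1/5 of the 79xx
# numbers, simple division gives a wide -- and not encouraging -- range.

hd7970_mhs = 600
for divisor in (2, 5):
    print(f"680 at 1/{divisor} of 7970: ~{hd7970_mhs / divisor:.0f} MH/s")
```

Both ends of that range sit at or below the 580's ~140-150 MH/s, which is why the architecture difference makes the naive division suspect.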