Author

Topic: supercomputer GPUs vs consumer GPUs? (Read 1874 times)

hero member
Activity: 602
Merit: 500
June 11, 2011, 03:11:53 PM
#8
You might be looking at about 100 MHash/sec per card? Not sure. But that's roughly the hash rate of an old Radeon HD 4850 ($50 AMD card).
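A quick sketch of what those ballpark figures imply in hash rate per dollar. The numbers below are just the rough estimates from this thread (about 100 MH/s for either card, $2400 for a Tesla M2050, $50 for an old HD 4850), not benchmarks:

```c
/* Back-of-the-envelope MH/s per dollar, using this thread's rough numbers. */
#include <stdio.h>

int main(void) {
    double tesla_mhash  = 100.0, tesla_price  = 2400.0; /* Tesla M2050 (estimate above) */
    double hd4850_mhash = 100.0, hd4850_price = 50.0;   /* old Radeon HD 4850 (estimate above) */

    printf("Tesla M2050: %.3f MH/s per dollar\n", tesla_mhash / tesla_price);
    printf("HD 4850:     %.3f MH/s per dollar\n", hd4850_mhash / hd4850_price);
    return 0;
}
```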
newbie
Activity: 42
Merit: 0
June 11, 2011, 03:10:04 PM
#7
Hmmm, are Nvidias really that bad?

I have the opportunity to use a setup with 22 GB of RAM and 2 x NVIDIA Tesla “Fermi” M2050 GPUs. Is there really no way to get mining performance out of that comparable to just any old ATI card?

They're good for rendering, i.e. in games, especially in complex scenes with fancy lighting and long view distances.
But for GPGPU use they're weak,
and they only stay in that business because ATI/AMD keeps doing it wrong.
newbie
Activity: 4
Merit: 0
June 11, 2011, 03:07:02 PM
#6
Hmmm, are Nvidias really that bad?

I have the opportunity to use a setup with 22 GB of RAM and 2 x NVIDIA Tesla “Fermi” M2050 GPUs. Is there really no way to get mining performance out of that comparable to just any old ATI card?
newbie
Activity: 42
Merit: 0
June 11, 2011, 03:00:20 PM
#5
Using supercomputers for single-precision computation is a joke, sure. Look at typical LINPACK/Livermore library usage, for example ;)
Even double precision isn't enough anymore, and a new quad-precision floating-point format was introduced both for Fortran (in 2003) and for other languages.
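To make the precision point concrete, here's a minimal C sketch (my own illustration, not from the post): accumulating a value that isn't exactly representable shows single precision drifting badly where double precision stays close.

```c
/* Minimal illustration of why single precision often isn't enough for
 * scientific work: sum 0.1 ten million times in float and in double.
 * The exact answer is 1,000,000; the float sum drifts visibly. */
#include <stdio.h>

int main(void) {
    float  sum_sp = 0.0f;
    double sum_dp = 0.0;
    for (int i = 0; i < 10000000; i++) {
        sum_sp += 0.1f;
        sum_dp += 0.1;
    }
    printf("single precision: %f\n", sum_sp);
    printf("double precision: %f\n", sum_dp);
    return 0;
}
```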
hero member
Activity: 602
Merit: 500
June 11, 2011, 02:56:29 PM
#4
Is the OP asking about using these cards for mining or for general use? No one mines on Nvidia cards regardless of their specs.

In general, though, people use the supercomputer GPUs for double-precision calcs rather than single-precision.
member
Activity: 84
Merit: 10
June 11, 2011, 02:53:30 PM
#3
No one serious is using an nVidia card to mine. There's no point; nVidia cards are far worse at mining than ATI cards of the same cost.

SHA-256 is built from integer operations, which ATI cards excel at.
nVidia cards are better at floating-point operations, which mapping polygons onto a screen requires lots of...

Basically, neither of those cards is great at mining.
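To illustrate the integer point: the SHA-256 round functions (as defined in FIPS 180-4) are nothing but 32-bit rotates, XORs, ANDs, and additions. Here's a minimal C sketch of just those round functions, not a full miner:

```c
/* The SHA-256 round functions from FIPS 180-4 -- pure 32-bit integer
 * rotate/xor/and work, no floating point anywhere. */
#include <stdint.h>
#include <stdio.h>

static uint32_t rotr(uint32_t x, unsigned n) { return (x >> n) | (x << (32 - n)); }

static uint32_t Ch(uint32_t x, uint32_t y, uint32_t z)  { return (x & y) ^ (~x & z); }
static uint32_t Maj(uint32_t x, uint32_t y, uint32_t z) { return (x & y) ^ (x & z) ^ (y & z); }
static uint32_t Sigma0(uint32_t x) { return rotr(x, 2) ^ rotr(x, 13) ^ rotr(x, 22); }
static uint32_t Sigma1(uint32_t x) { return rotr(x, 6) ^ rotr(x, 11) ^ rotr(x, 25); }

int main(void) {
    /* Arbitrary sample words, just to exercise the functions. */
    uint32_t a = 0x6a09e667, b = 0xbb67ae85, c = 0x3c6ef372, e = 0x510e527f;
    printf("Sigma0=%08x Sigma1=%08x Ch=%08x Maj=%08x\n",
           Sigma0(a), Sigma1(e), Ch(e, b, c), Maj(a, b, c));
    return 0;
}
```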
newbie
Activity: 42
Merit: 0
June 11, 2011, 02:53:13 PM
#2
Single-precision shaders == fail.
Scalar architecture == fail.
So, in short, there's nothing interesting in NVidia except the easy-to-start, easy-to-use SDK.
But that revolves around a proprietary, slowly advancing API called CUDA, which isn't worth investing development effort in.
newbie
Activity: 4
Merit: 0
June 11, 2011, 02:38:56 PM
#1
So take a look at the specs:

http://www.nvidia.com/docs/IO/105880/DS_Tesla-M2090_LR.pdf
M2050:
1030 single precision GFLOPs
448 CUDA cores
3 GB GDDR5
148 GB/s memory bandwidth
cost: $2400

vs.

http://www.geeks3d.com/20110324/nvidia-geforce-gtx-590-officially-launched-specifications-and-reviews-hd-6990-is-still-faster/
GeForce GTX 590
2486 single precision GFLOPs
1024 CUDA cores
3 GB GDDR5
327 GB/s memory bandwidth
cost: $699

So why do people use supercomputer GPUs? They seem worse in every way, spec-wise. Is there any way they'd be better than the consumer card for Bitcoin mining?
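For what it's worth, just dividing the single-precision numbers quoted above by the prices gives a rough throughput-per-dollar figure. A tiny C sketch using only the figures in this post:

```c
/* GFLOPS per dollar from the specs quoted above (this post's figures,
 * not independent benchmarks). */
#include <stdio.h>

int main(void) {
    double tesla_gflops = 1030.0, tesla_price = 2400.0; /* Tesla M2050 */
    double gtx_gflops   = 2486.0, gtx_price   = 699.0;  /* GeForce GTX 590 */

    printf("Tesla M2050: %.2f SP GFLOPS per dollar\n", tesla_gflops / tesla_price);
    printf("GTX 590:     %.2f SP GFLOPS per dollar\n", gtx_gflops / gtx_price);
    return 0;
}
```

By that metric the consumer card comes out roughly 8x ahead on single-precision throughput per dollar, which is exactly why the question comes up.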

