
Topic: Cray Unveils Its First GPU Supercomputer (Read 2945 times)

member
Activity: 92
Merit: 10
May 25, 2011, 12:06:11 PM
#15
This has 50 petaflops of power.
Considering they say it may scale up to that, you should ask for a quote on that configuration. I've seen a card at $3K for a teraflop, which would make it $150M. That was a one-off price for a PC card, though; I'd still assume it would be in the tens of millions, so if you had that much cash it would be better spent on GPUs or an ASIC for mining.

Again, much cheaper to simply hire a team to find the people with the most bitcoins and extort them.
full member
Activity: 294
Merit: 100
This has 50 petaflops of power.
Considering they say it may scale up to that, you should ask for a quote on that configuration. I've seen a card at $3K for a teraflop, which would make it $150M. That was a one-off price for a PC card, though; I'd still assume it would be in the tens of millions, so if you had that much cash it would be better spent on GPUs or an ASIC for mining.
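
Spelled out, since the numbers go by fast: a back-of-the-envelope version in Python, taking the $3K-per-teraflop card price above at face value (it was a one-off PC card price, so a real machine would price very differently):

Code:
# Naive cost scaling: consumer $/TFLOP stretched to a 50 PFLOP machine.
# Assumes price scales linearly with FLOPS -- it won't for a real Cray.
price_per_tflop_usd = 3000    # one-off PC card price from the post above
target_tflops = 50 * 1000     # 50 petaflops expressed in teraflops

cost_usd = price_per_tflop_usd * target_tflops
print("~$%.0fM" % (cost_usd / 1e6))   # ~$150M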
newbie
Activity: 42
Merit: 0
This has 50 petaflops of power.

More than the 40 petaflops of the whole Bitcoin network.

This could kill the network in a matter of seconds!

Nobody is invincible, including Bitcoin!
The trick/lie is in HOW those "petaflops" are measured ;)
Hardly any such machine reaches that rate on general-purpose math, and it's even less likely to do so with double precision.
Both of those make this "super" pointless compared to the Bitcoin network.
hero member
Activity: 518
Merit: 500
This has 50 petaflops of power.

More than the 40 petaflops of the whole Bitcoin network.

This could kill the network in a matter of seconds!

Nobody is invincible, including Bitcoin!
newbie
Activity: 42
Merit: 0
Basically because AMD has provided virtually NO support for GPGPU developers (both software and hardware) for years!!
Even finding datasheets is troublesome (and many things are still undisclosed :/).
Nvidia, on the other hand, actually cooperates with developers and even assists in development itself :P
I wish AMD would start investing a little more in building a healthy ecosystem around its GPGPU developers/enthusiasts :-|
starting with improving the OpenCL SDK, expanding the developer support division, and helping people take their first GPGPU steps.
member
Activity: 92
Merit: 10
Hmm, or is it because Nvidia's CUDA language is easier to code with?
My guess is that it's because floating point is more useful for most simulations. I'm sure someone will be able to point out exceptions, but the only two mainstream research applications I've got that support GPUs are a GIS application and an RF (radio frequency) simulation package, where floating point is a natural fit. I'd guess most physics and "real world" simulations would fall into the same category.

The only two GPU applications I've got where integer is better are Bitcoin mining and a password cracker. Outside a few government agencies and security researchers, that's probably not the sort of thing that many people throw a lot of money at.

Lol, good point. For both of the integer applications, if you have significant funds or influence you can simply use the much cheaper solution of interrogation to get the password, or hire a criminal to steal someone's bitcoins. For floating point, it's impossible to influence a weather pattern or a cell tower via political power :D
full member
Activity: 294
Merit: 100
Hmm, or is it because Nvidia's CUDA language is easier to code with?
My guess is that it's because floating point is more useful for most simulations. I'm sure someone will be able to point out exceptions, but the only two mainstream research applications I've got that support GPUs are a GIS application and an RF (radio frequency) simulation package, where floating point is a natural fit. I'd guess most physics and "real world" simulations would fall into the same category.

The only two GPU applications I've got where integer is better are Bitcoin mining and a password cracker. Outside a few government agencies and security researchers, that's probably not the sort of thing that many people throw a lot of money at.
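
To make the integer point concrete: SHA-256, the hash behind mining, is built entirely from 32-bit integer rotates, shifts, XORs, ANDs, and adds; there's no floating point anywhere in it. A minimal sketch of the per-round bit functions (not a full implementation):

Code:
# The bitwise primitives inside each SHA-256 round -- all 32-bit integer ops.
MASK32 = 0xFFFFFFFF

def rotr(x, n):          # rotate a 32-bit word right by n bits
    return ((x >> n) | (x << (32 - n))) & MASK32

def ch(e, f, g):         # "choose": bits of e pick between f and g
    return (e & f) ^ ((~e & g) & MASK32)

def maj(a, b, c):        # "majority" vote of the three inputs' bits
    return (a & b) ^ (a & c) ^ (b & c)

def big_sigma0(a):       # one of the round's mixing functions
    return rotr(a, 2) ^ rotr(a, 13) ^ rotr(a, 22)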
legendary
Activity: 3080
Merit: 1080
Bah, why didn't they partner up with AMD? They're already a big purchaser of AMD Opteron chips, so why not poke AMD to develop a solution similar to the Teslas? Multiple Cayman cores on a PCB, anyone? Hmm, or is it because Nvidia's CUDA language is easier to code with?

full member
Activity: 336
Merit: 100
I thought their "non-GPU" supercomputers were already hitting petaflops? So wouldn't those be faster?
legendary
Activity: 3878
Merit: 1193
ROFL. Cray jumped the shark when they decided to use Windows on their HPCs.
newbie
Activity: 14
Merit: 0
Previously, the only difference between certain Tesla models and certain consumer GeForce cards was the lack of video connectors and a (presumably) more stringent binning process to ensure Tesla parts were stable within specific tolerances for 24/7 use.

newbie
Activity: 42
Merit: 0
Nvidia Tesla inside?
No double-precision math support?
What kind of "super" would that be? :/
A home super? :| Like an Octane 3?
Meanwhile the new Fortran 2003-adapted libs enjoy quad-precision FP math :-/

p.s.
Finally Cray jumps on the hybrid-supers bandwagon :/
Note the hybrid nature of the five most powerful supers.

p.p.s.
They'll probably scrap their FPGA/vector module development soon.
full member
Activity: 294
Merit: 100
I noticed in the article that the GPUs are Teslas, and while I'm not familiar with that model, presumably they're better at floating point than integer, the same as the consumer Nvidia cards.
member
Activity: 61
Merit: 10
A 5850 does 2 teraflops (or more), so assuming 300 Mhash/s for each 5850, I estimate 10.5 Ghash/s...
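
For the curious, that 10.5 Ghash/s works out if you assume a ~70 TFLOP machine and that hash rate scales with raw FLOPS; both are loose assumptions (mining is integer work, and the 70 TFLOP figure is my guess at the configuration being estimated):

Code:
# Hypothetical scaling: treat the machine as a stack of HD 5850s.
tflops_per_5850 = 2.0     # per the post above
mhash_per_5850 = 300.0    # per the post above
machine_tflops = 70.0     # assumed figure that reproduces the estimate

cards = machine_tflops / tflops_per_5850    # 35 card-equivalents
print("%.1f Ghash/s" % (cards * mhash_per_5850 / 1000))   # 10.5 Ghash/s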