Topic: Why GPU hostile? Why not CPU friendly?

legendary
Activity: 1484
Merit: 1005
July 06, 2012, 12:08:12 PM
#5
AES-256 ASICs can significantly outperform CPUs with AES-NI.  When you don't have to do all the other things a CPU has to do, you can stamp a lot of AES-only cores onto relatively little silicon.

The idea of Scrypt is that it requires lots of low-latency, high-bandwidth memory.  It's completely uneconomical on an FPGA.  An ASIC using external DRAM wouldn't be much faster than a CPU with external DRAM.  An ASIC with on-die RAM would be very expensive in terms of silicon and therefore monetary cost.

In other words, Scrypt already is pretty close to a CPU-optimized hash.  GPUs can accelerate it some, but not tremendously, because memory is still a bottleneck.  ASIC versions wouldn't come out enough cheaper in $/(hash/s) or J/hash to justify the development; at almost any level it's easier to use cheap commodity hardware.  I'm not saying it's impossible; you just won't see the massive speedups that are possible with SHA.

Scrypt would probably benefit from an increased memory requirement.  Adjusting the memory requirement based on difficulty would probably make it an adequately CPU-friendly (and, to a lesser degree, GPU-friendly, since GPUs are just specialized CPUs after all) hash for the foreseeable future.

Increasing the memory requirement of scrypt will just push it to either the DDR3 on the mobo or the GDDR on the graphics card. Obviously the GDDR is faster. The reason the current implementation works quickly on CPUs is that it fits well into the L2 cache, which has a bandwidth that is about 30% of that of a GPU.

Writing an encryption algorithm that runs faster on a CPU than on a GPU is a much more difficult task than most people realize. Intel is constantly trying to battle the GPU companies and show that a CPU is at least as fast as a GPU at many parallelizable tasks, but it has had extensive difficulty actually demonstrating this even for well-studied problems like sorting or pathfinding.
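
To put the cache-fit point in concrete numbers, here is a back-of-the-envelope sketch in Python. It assumes the scrypt cost parameters Litecoin uses (N=1024, r=1, p=1); 128*N*r bytes is the size of the scratchpad that scrypt's ROMix step builds.

Code:
# Rough scratchpad size for scrypt(N, r, p): ROMix stores N blocks of 128*r bytes.
def scrypt_scratchpad_bytes(n, r):
    return 128 * n * r

# Litecoin-style parameters: N=1024, r=1 -> 128 KiB, which fits in a typical
# 256 KiB per-core L2 cache.
print(scrypt_scratchpad_bytes(1024, 1) // 1024, "KiB")

# Raising N to 2**20 pushes the working set to 128 MiB, i.e. out to DDR3/GDDR.
print(scrypt_scratchpad_bytes(2**20, 1) // (1024 * 1024), "MiB")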
member
Activity: 75
Merit: 10
July 06, 2012, 05:30:01 AM
#4
Ok.  That makes sense.  Figured that an idea that simple probably had some substantial underlying flaws.  Thanks for the responses!
hero member
Activity: 728
Merit: 500
165YUuQUWhBz3d27iXKxRiazQnjEtJNG9g
July 06, 2012, 05:09:10 AM
#3
AES-256 ASICs can significantly outperform CPUs with AES-NI.  When you don't have to do all the other things a CPU has to do, you can stamp a lot of AES-only cores onto relatively little silicon.

The idea of Scrypt is that it requires lots of low-latency, high-bandwidth memory.  It's completely uneconomical on an FPGA.  An ASIC using external DRAM wouldn't be much faster than a CPU with external DRAM.  An ASIC with on-die RAM would be very expensive in terms of silicon and therefore monetary cost.

In other words, Scrypt already is pretty close to a CPU-optimized hash.  GPUs can accelerate it some, but not tremendously, because memory is still a bottleneck.  ASIC versions wouldn't come out enough cheaper in $/(hash/s) or J/hash to justify the development; at almost any level it's easier to use cheap commodity hardware.  I'm not saying it's impossible; you just won't see the massive speedups that are possible with SHA.

Scrypt would probably benefit from an increased memory requirement.  Adjusting the memory requirement based on difficulty would probably make it an adequately CPU-friendly (and, to a lesser degree, GPU-friendly, since GPUs are just specialized CPUs after all) hash for the foreseeable future.
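
As a rough illustration of that adjustable memory knob, here is a sketch using Python's hashlib.scrypt (available in Python 3.6+ when built against OpenSSL); the header and salt values are made up, and maxmem has to be raised once N gets large.

Code:
import hashlib

header = b"example block header"
salt = b"example salt"

# Litecoin-style cost parameters: N=1024, r=1, p=1 -> roughly 128 KiB working set.
small = hashlib.scrypt(header, salt=salt, n=1024, r=1, p=1, dklen=32)

# Increasing N is the adjustable memory requirement discussed above:
# N=2**20 needs roughly 128 MiB, which exceeds OpenSSL's default memory cap,
# so maxmem is raised explicitly here.
big = hashlib.scrypt(header, salt=salt, n=2**20, r=1, p=1, maxmem=2**28, dklen=32)

print(small.hex())
print(big.hex())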
hero member
Activity: 840
Merit: 507
July 06, 2012, 05:03:20 AM
#2
One problem is that at present very few processors (Westmere, Sandy Bridge, Ivy Bridge, Bulldozer) support the AES instruction set.
http://en.wikipedia.org/wiki/AES_instruction_set#CPUs_with_AES_instruction_set
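
If you want to check whether your own CPU exposes AES-NI, here is a quick Linux-only sketch (it parses /proc/cpuinfo; the kernel reports the instruction set as the "aes" flag):

Code:
# Linux-only: the kernel lists "aes" among the CPU flags when AES-NI is present.
# On other platforms you would need CPUID or a third-party library instead.
def has_aes_ni(path="/proc/cpuinfo"):
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return "aes" in line.split()
    except OSError:
        pass
    return False

print("AES-NI supported:", has_aes_ni())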
member
Activity: 75
Merit: 10
July 06, 2012, 04:33:04 AM
#1
In looking at different Alt Coins, it seems like many of the popular ones were made in an attempt to be GPU-hostile, e.g. by using the Scrypt algorithm.  That algorithm is very memory intensive, so it would be more costly to use FPGAs or ASICs, and GPUs don't get as much of an edge; however, that approach seems doomed to fail.  Maybe I'm just pessimistic, but if there is a strong enough economy in any cryptocurrency, someone will eventually make an ASIC for it if it is technologically feasible.

Given that, is it reasonable to make an Alt Coin that uses functions already implemented in hardware, such that the time to develop the ASIC has essentially already been spent?  For example, I know that some Intel processors have AES-256 built into them.  Could a block chain be built off of this in essentially the same way that Bitcoin uses SHA or Litecoin uses Scrypt?
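
Purely to make that question concrete, here is a toy sketch of what such a proof-of-work loop might look like.  It assumes the third-party pycryptodome package for the AES primitive and wraps AES-256 in a Davies-Meyer-style construction so the cipher acts as a one-way compression function; this is illustrative only, not a vetted hash design, and a real miner would need a native AES-NI code path rather than Python.

Code:
import os
from Crypto.Cipher import AES  # pycryptodome (assumed installed)

def aes_dm_hash(data):
    """Toy Davies-Meyer-style hash: each 32-byte chunk of the message keys
    AES-256, and the running 16-byte state is encrypted and XORed back."""
    state = bytes(16)
    # Pad with 0x80 then zeros so the length is a multiple of 32 bytes
    # (every chunk must be a valid AES-256 key).
    data += b"\x80" + b"\x00" * ((31 - len(data)) % 32)
    for i in range(0, len(data), 32):
        block = AES.new(data[i:i + 32], AES.MODE_ECB).encrypt(state)
        state = bytes(a ^ b for a, b in zip(block, state))
    return state

def mine(header, difficulty_bits):
    """Brute-force a nonce until the digest has difficulty_bits leading zero bits."""
    target = 1 << (128 - difficulty_bits)
    nonce = 0
    while True:
        digest = aes_dm_hash(header + nonce.to_bytes(8, "little"))
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

print(mine(os.urandom(32), difficulty_bits=16))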

Assuming that it is feasible, I see some pros and cons.  On the plus side, miners would have a good, energy-efficient use of their CPUs without worrying that people will get a huge portion of the hashing power by using GPUs, FPGAs, or ASICs; I think the strength of the network lies in how easily average users can contribute to it.  On the downside, though, this would be another Alt Coin, which would need time for actual adoption.  Also, older CPUs would be almost useless; anything not supporting the specific instruction(s) required would be SOL.  This may also apply to individuals using CPUs from the other team (e.g. an instruction supported by only AMD or only Intel).

To put into perspective the benefit of dedicated hardware in a CPU, check out the graph here:

http://www.tomshardware.com/reviews/clarkdale-aes-ni-encryption,2538-5.html

The graph is about halfway down the page.  I was looking for a different one, but this one illustrates the point nicely: a processor that was outmatched in most arithmetic operations by a factor of about 2 outstripped the i7-870 by a factor of more than 6 when doing a hardware-accelerated task.



Discuss!