
Topic: defending ahead the p2p nature of bitcoin - blending hashcash & scrypt (Read 13874 times)

hero member
Activity: 714
Merit: 500
Martijn Meijering
Intel is adding new SSE instructions for SHA calculations to their processors. While this will not make CPU mining profitable in its own right, it may make it sensible to run a mining process in the background whenever your computer is on for other reasons anyway. This should help a bit with keeping Bitcoin distributed. It would be nice if the same thing happened for GPUs too.

New Instructions Supporting the Secure Hash Algorithm on Intel® Architecture Processors
hero member
Activity: 714
Merit: 500
Martijn Meijering
One consideration that I don't recall reading about before just occurred to me: in addition to having separate difficulties for the two hashing functions, we could also have different reward schedules. Depending on how you do it, this could either increase or decrease the potential controversy over a change in the rules, and help avoid a fork, which would be bad for everybody. If the scrypt-based hash didn't get any reward, it might not alienate the ASIC miners, while it would still give those running scrypt a say in the construction of the blockchain. To do this, you might want to adjust the difficulty so that blocks are created twice as fast to keep the BTC generation on the same schedule.
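To sketch the arithmetic behind keeping BTC generation on schedule (a hypothetical illustration; the 25 BTC subsidy and the 50/50 block split between the two functions are assumptions, not part of the proposal):

```python
# Hypothetical illustration: halve the block interval while only SHA256
# blocks carry a subsidy, and coin issuance stays on the original schedule.

REWARD_PER_BLOCK = 25  # BTC subsidy assumed at the time of this thread

def issuance_per_hour(interval_min, rewarded_fraction):
    """BTC issued per hour, given the block interval in minutes and the
    fraction of blocks (here, the SHA256 ones) that carry a subsidy."""
    blocks_per_hour = 60 / interval_min
    return blocks_per_hour * rewarded_fraction * REWARD_PER_BLOCK

# Status quo: a rewarded block every 10 minutes.
baseline = issuance_per_hour(10, 1.0)

# Blended chain: a block every 5 minutes, but only the ~half mined with
# SHA256 carries the subsidy; scrypt blocks get none.
blended = issuance_per_hour(5, 0.5)

assert baseline == blended == 150.0  # issuance schedule unchanged
```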
sr. member
Activity: 322
Merit: 250
Very interesting thread.

My background is 20 years of designing CPUs/ASICs/GPUs.

A few comments here:

1) There is no computational problem that you can't design custom ASIC hardware to do faster than a GPU.
Can we make a proof of work based on the mathematical principles used for rendering video games?  GPUs should already be optimized for that task.

Half joking.  But seriously?

x86/x64 CPUs are too unspecialised to have an algorithm made for them, I would assume.
member
Activity: 104
Merit: 10
It's not necessary to prevent ASICs from doing the PoW faster than CPUs / GPUs, just to make sure they don't have more power than the huge installed base of computers bought for other purposes.

Ahhh, I think scrypt is going to turn out pretty well for that purpose. I think the advantage of a custom-designed scrypt chip over commodity hardware is likely to be much smaller than it is for SHA256.

Oatmo
hero member
Activity: 714
Merit: 500
Martijn Meijering
It's not necessary to prevent ASICs from doing the PoW faster than CPUs / GPUs, just to make sure they don't have more power than the huge installed base of computers bought for other purposes.
member
Activity: 104
Merit: 10
Very interesting thread.

My background is 20 years of designing CPUs/ASICs/GPUs.

A few comments here:

1) There is no computational problem that you can't design custom ASIC hardware to do faster than a GPU. How much faster is a function of the types of operations used. If you use multiply and divide, the advantage will be less; using AND/OR/XOR logical functions makes the advantage more. ASICs will always kill the CPU on control, because general-purpose code runs tons of instructions for control flow, and these all become a few small gates in HW. In general, the more complicated the decisions, the bigger the advantage custom HW will have.

2) The way to limit ASICs is to have huge memory requirements. This essentially limits all the options at the memory controller. GPUs kill on these apps because they are optimized around two things: (1) massive memory bandwidth, and (2) massive numbers of threads. Basically the GPU wants to run 10000 threads at once, and assumes that every memory access is going to miss and go to memory. They are optimized around using all the parallel HW in the GPU. The one thing they suck at is memory latency, so they hide that with large numbers of threads. scrypt is essentially a large-memory algorithm, which makes it difficult for ASICs. You can build large memories onto ASICs, but the cost will be prohibitively large unless you can run millions of units.
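For reference, scrypt's memory-hard core (ROMix) can be sketched in a few lines of Python. This is a toy illustration only: SHA256 stands in for scrypt's real Salsa20/8-based BlockMix, and the parameters are invented. It shows why the memory table is unavoidable: the phase-2 reads depend on the data itself, so you can't know which cells you'll need in advance.

```python
import hashlib

def H(data: bytes) -> bytes:
    # Placeholder mixing function; real scrypt uses a Salsa20/8 BlockMix.
    return hashlib.sha256(data).digest()

def romix(seed: bytes, n: int) -> bytes:
    """Toy version of scrypt's ROMix. The table of n cells forces the
    memory footprint; the data-dependent reads punish memory latency."""
    x = H(seed)                 # normalize to a 32-byte state
    v = []
    for _ in range(n):          # Phase 1: sequentially fill n cells
        v.append(x)
        x = H(x)
    for _ in range(n):          # Phase 2: n data-dependent random reads
        j = int.from_bytes(x[:4], "little") % n   # scrypt's "integerify"
        x = H(bytes(a ^ b for a, b in zip(x, v[j])))
    return x

digest = romix(b"block header || nonce", 1024)
assert len(digest) == 32
```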

Thinking out loud here: the only way I can see to design an algorithm where neither GPUs nor ASICs would be preferred is something with a lot of decisions, large memory, and chaining in the algorithm (scrypt has all these things, but some sort of nonce-to-nonce chaining really goes against how the crypto currencies work).

I think a full ASIC implementation would have a unit cost much higher than for the SHA algorithm in bitcoin, but would still produce an order-of-magnitude performance improvement over CPUs; I'm not sure how much better than GPUs. In the end, if it's profitable to do so, people will design custom HW for these tasks. Right now it doesn't look likely at litecoin's current price, but everything could change. I think if something happened to bitcoin's viability (like corporations co-opting it or something like that), then people would switch to litecoin, and then it would be much more profitable.

Oatmo
hero member
Activity: 714
Merit: 500
Martijn Meijering
However, again that is not a good ASIC-hard direction, because the SIMD nature of AMD GPUs can be overcome, eg http://www.adapteva.com with a MIMD (ie no SIMD restrictions) 28nm 64-core RISC CPU and plans for 1024 or even 4096 RISC cores per chip.  And they are low energy too.

On the up side, they are still general purpose hardware. It might be good to have a set of algorithms, so that each class of PoW hardware has at least one it excels at.
sr. member
Activity: 280
Merit: 257
bluemeanie
hero member
Activity: 714
Merit: 500
Martijn Meijering
In addition to using memory-hard hashing algorithms, would it be useful to investigate choosing a hashing function that requires the hashing core to be of similar complexity to a typical CPU execution unit? I'm thinking of something that uses multiplication, division and modular reduction relative to some large prime number, and elliptic curve group operations, rather than the typical rotate, xor and addition modulo 2^n operations. If necessary we could always xor the result with an ordinary SHA256 hash.
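The xor-combination at the end can be sketched directly. Here SHA512 (truncated) is only a placeholder for the hypothetical multiplication/EC-based hash, which doesn't exist off the shelf; the point of the construction is that the combined result is no easier to grind than SHA256 alone, even if the exotic function turns out to be weak.

```python
import hashlib

def combined_pow_hash(header: bytes) -> bytes:
    """XOR a stand-in 'exotic' hash with plain SHA256, per the suggestion
    above. SHA512 truncated to 32 bytes is only a placeholder for the
    hypothetical CPU-shaped hash function."""
    exotic = hashlib.sha512(header).digest()[:32]   # placeholder
    sha = hashlib.sha256(header).digest()
    return bytes(a ^ b for a, b in zip(exotic, sha))

h = combined_pow_hash(b"block header bytes")
assert len(h) == 32
```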
sr. member
Activity: 404
Merit: 359
in bitcoin we trust
I can see the attraction of CPUs, however if you optimized for the CPU to the detriment of the GPU, that leaks possible advantages to ASICs over GPUs.  I think about all you can say a CPU has is faster single-core performance (irrelevant for mining: more compute bandwidth is more important than per-core speed), and main memory readable over a narrow bus (DDR3 at 64-bit vs GDDR5 over 384-bit).

Another thing CPU cores have going for them over GPUs is that they are independent.  AMD GPU cores are in SIMD groups: eg the 7970 has 2048 cores, but groups of 16 of them have to execute the same instruction each clock on different data, which means there are only really 128 cores that can do independent dynamic work.  And the cores are about 32x slower than a CPU core.  So a four-core CPU matches a GPU for dynamic workloads.
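Writing out the arithmetic in the paragraph above (all numbers are taken from the post itself):

```python
# Numbers from the post: an AMD 7970 has 2048 cores in SIMD groups of 16,
# and each GPU core is roughly 32x slower than a CPU core on divergent work.
gpu_cores = 2048
simd_width = 16
slowdown_vs_cpu_core = 32

# Each 16-wide group must execute in lockstep, so for fully divergent
# ("dynamic") work only one lane per group does useful independent work.
independent_groups = gpu_cores // simd_width          # 128

# Scale by per-core speed to get a CPU-core equivalent.
cpu_core_equivalent = independent_groups / slowdown_vs_cpu_core   # 4.0

assert independent_groups == 128
assert cpu_core_equivalent == 4.0   # a four-core CPU matches the GPU
```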

However, again that is not a good ASIC-hard direction, because the SIMD nature of AMD GPUs can be overcome, eg http://www.adapteva.com with a MIMD (ie no SIMD restrictions) 28nm 64-core RISC CPU and plans for 1024 or even 4096 RISC cores per chip.  And they are low energy too.

Adam
sr. member
Activity: 404
Merit: 359
in bitcoin we trust
I think one could make a mining function which was fairly hard to gain an advantage with using ASICs.  But I do think you have to target GPUs because a GPU is basically a better CPU.

GPUs wouldn't necessarily be bad, because they are consumer hardware with a primary purpose other than mining, and therefore impossible to suppress. But it would be nice if CPUs were still competitive, because the hardcore gamer community too is a very unrepresentative cross-section of society. But because there are so many CPUs, they might collectively still wield considerable influence even with a factor-of-ten performance difference.

I can see the attraction of CPUs, however if you optimized for the CPU to the detriment of the GPU, that leaks possible advantages to ASICs over GPUs.  I think about all you can say a CPU has is faster single-core performance (irrelevant for mining: more compute bandwidth is more important than per-core speed), and main memory readable over a narrow bus (DDR3 at 64-bit vs GDDR5 over 384-bit).

GDDR5 in an AMD 7970 is quad-pumped at 1500MT with a 384-bit data bus, whereas DDR3 is dual-pumped at 1333MT or 1600MT etc, ie a similar transfer rate.  So if you are reading random data in CPU-friendly 64-bit chunks, the GPU RAM is still 2x the speed (quad vs dual pump), even though its 6x bus-width advantage is wasted on random access.  However, i7s have two memory channels, so they match the GPU for 64-bit reads, and some CPUs, eg the 3930k, have quad channels, so they can do 2x that and beat a GPU.  The i7 3930k is rated at 51.2GB/sec memory bandwidth, a regular i7 at 25.6GB/sec; the AMD 7970 is rated at 288GB/sec, but in terms of ability to read 64-bit chunks the 7970 would do 6x less = 48GB/sec.

However the peak figures are in sequential read, DDR3 and GDDR5 are both slower with random reads.  And thrashing RAM with reads for data intentionally too big to fit in L3 is going to bog your computer down for normal use.
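The bandwidth figures above can be checked with a few lines. The numbers come from the post; "effective" assumes every access is a random 64-bit read, so only 64 of the GPU's 384 bus bits do useful work per transfer.

```python
# Peak-bandwidth figures from the post, and the effective rate when every
# access is a random 64-bit read.
def effective_64bit(peak_gb_s, bus_bits):
    """Usable bandwidth when only 64 bits of each bus-width transfer count."""
    return peak_gb_s * 64 / bus_bits

amd_7970 = effective_64bit(288, 384)     # 48.0 GB/s on random 64-bit reads
i7_dual  = 25.6                          # 2x 64-bit DDR3 channels, per post
i7_3930k = 51.2                          # 4x 64-bit DDR3 channels, per post

assert amd_7970 == 48.0
assert i7_3930k > amd_7970   # quad-channel CPU beats the GPU on this metric
```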

Adam
newbie
Activity: 42
Merit: 0
Wow very interesting stuff in this thread, thanks!
sr. member
Activity: 404
Merit: 359
in bitcoin we trust
Just found this interesting video of the Bitcoin conference:

Dan Kaminsky Predicts The End Of The Current Proof-Of-Work Function

Actual ASIC miners will not allow this change. And they have more votes (=hashpower) than anybody.

It's enough for them to reject the blocks with the new PoW algorithm.

I am not sure the ASICs actually have any protocol choice power.  If the bitcoin main developer branch for some reason decided to phase in a new mining algorithm, the choice actually belongs to the users.  If the users agree, they will stay on the main branch and accept the algorithm phase-in.  If they don't, someone forks the code and the users migrate over to the new fork.

If there were a code fork like that, where both forks accepted the existing coins created up to the fork date as valid, that might be kind of strange: sort of like an alt-coin that accepts bitcoins up to a hard-fork point in time.

Adam
sr. member
Activity: 252
Merit: 250
Just found this interesting video of the Bitcoin conference:

Dan Kaminsky Predicts The End Of The Current Proof-Of-Work Function

Actual ASIC miners will not allow this change. And they have more votes (=hashpower) than anybody.

It's enough for them to reject the blocks with the new PoW algorithm.
hero member
Activity: 714
Merit: 500
Martijn Meijering
I think one could make a mining function which was fairly hard to gain an advantage with using ASICs.  But I do think you have to target GPUs because a GPU is basically a better CPU.

GPUs wouldn't necessarily be bad, because they are consumer hardware with a primary purpose other than mining, and therefore impossible to suppress. But it would be nice if CPUs were still competitive, because the hardcore gamer community too is a very unrepresentative cross-section of society. But because there are so many CPUs, they might collectively still wield considerable influence even with a factor-of-ten performance difference.
hero member
Activity: 714
Merit: 500
Martijn Meijering
sr. member
Activity: 404
Merit: 359
in bitcoin we trust
Come to think of it, boolean satisfiability isn't such a good choice after all. Hmm. Maybe NP-complete isn't such a good criterion.

I think one could make a mining function which was fairly hard to gain an advantage with using ASICs.  But I do think you have to target GPUs, because a GPU is basically a better CPU.  The CPU has a lot of resources dedicated to optimizing single-thread execution speed (eg superscalarity, out-of-order execution etc); GPUs don't have that.  A 7970 basically has 2048 RISC cores.  So I think you want to optimize for the characteristics of the GPU: memory line size, cache architecture, instruction set.  Make all of those things work hard, and dynamically, but in proportion to the resources the GPU has, eg some integer instructions, some FP instructions, some memory.

Then a would-be ASIC miner has to make a better GPU.  AMD is putting quite a lot of resources into that.

Also I think we could have automatically balanced algorithm mining, including mining parameters.  The idea is that anyone can introduce a new algorithm or a new set of algorithm parameters, presumably with some public review process so that there is no trapdoor known only to the introducer.  Each algorithm then has a separate floating difficulty set by the network.  The algorithm which empirically appears least susceptible to inflation sets the inflation target, and the other algorithms have their difficulty adjusted so that their inflation matches that minimum.  So, eg, if a big batch of new fast SHA256 hashcash ASICs comes online, to the extent that difficulty gets much harder quickly, the difficulty is increased faster still, so that the proportion of coins producible with SHA256 mining falls and the other mining functions rise.
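A rough sketch of that balancing rule (the function name, the penalty exponent, and the specific numbers are all invented for illustration; this is one possible reading of the idea, not a worked-out proposal):

```python
# Hypothetical sketch: each algorithm's difficulty is retargeted not just
# to its own hashrate, but scaled up further when its recent block share
# exceeds its target share, so a sudden ASIC influx on one algorithm
# shrinks that algorithm's slice of coin issuance.

def retarget(difficulty, actual_interval, target_interval,
             block_share, target_share, penalty=1.0):
    # Standard retarget: keep the block interval on schedule.
    d = difficulty * target_interval / actual_interval
    # Extra adjustment: if this algorithm found more than its fair share
    # of recent blocks, raise difficulty beyond the standard retarget.
    if block_share > target_share:
        d *= (block_share / target_share) ** penalty
    return d

# SHA256 ASICs come online: intervals drop to 5 min and SHA256 finds 70%
# of blocks instead of its 50% target share.
new_d = retarget(1000.0, actual_interval=5, target_interval=10,
                 block_share=0.7, target_share=0.5)
assert new_d > 2000.0   # harder than the plain 2x retarget alone
```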

Adam
hero member
Activity: 714
Merit: 500
Martijn Meijering
Come to think of it, boolean satisfiability isn't such a good choice after all. Hmm. Maybe NP-complete isn't such a good criterion.
full member
Activity: 145
Merit: 100

I am anti-fork, as forks are bad for mindshare and confidence, and dilutive of the aggregate value of bitcoin and crypto currency.


Awwww man.

Seriously though, with all due respect (and admitting my conflict of interest here) - alt-coins (or rather, truly innovative alt-coins as opposed to one-or-two-tweak clones) are useful.

They prevent monoculture.

If anything, we need more altcoins pursuing different niches (I feel that there are market niches which BTC, contrary to popular belief, fills imperfectly, allowing for an alt to take that niche without affecting mainstream btc adoption - but that's a long and boring story)

1. Alt-coins
2. Crypto-currency exchange
3. Currency basket

A user can hold bitcoin, litecoin and whatever-coin using any ratio that they deem fit.
hero member
Activity: 714
Merit: 500
Martijn Meijering
Maybe the boolean satisfiability problem would be a better choice though.