
Topic: ASIC-resistant Proof of Work - page 3. (Read 5048 times)

full member
Activity: 126
Merit: 100
September 29, 2014, 05:07:47 PM
#22
Hmm... Ok. Darn. I think my random index jumping would work, but it does require that the verification algorithm has the same amount of memory. I need to go back to the drawing board I guess.

There are already PoWs that combine a large memory requirement with instant verifiability
(and where memory access dominates computation).

I'm an amateur when it comes to this, but isn't the memory access rate also an important factor? For example, if the memory requirement is large but the memory access rate is low, then thousands of computation cores could share the same memory in an ASIC. If, on the other hand, the required memory access rate is high, then a shared memory would become a bottleneck with too many cores on the same chip.
legendary
Activity: 988
Merit: 1108
September 29, 2014, 04:51:33 PM
#21
And the number of iterations must be low enough so that validation of blocks becomes fast enough.

Seems you never heard of asymmetric PoWs, where only proof attempts require a lot of memory,
but verification requires negligible...

But is it really a problem if validation requires an equal amount of memory?

It's a big problem for SPV clients, e.g. on mobile devices,
and it also makes the network more susceptible to DDoS attacks with false proofs.

Hmm... Ok. Darn. I think my random index jumping would work, but it does require that the verification algorithm has the same amount of memory. I need to go back to the drawing board I guess.

There are already PoWs that combine a large memory requirement with instant verifiability
(and where memory access dominates computation).
full member
Activity: 126
Merit: 100
September 29, 2014, 04:48:08 PM
#20
And the number of iterations must be low enough so that validation of blocks becomes fast enough.

Seems you never heard of asymmetric PoWs, where only proof attempts require a lot of memory,
but verification requires negligible...

But is it really a problem if validation requires an equal amount of memory?

It's a big problem for SPV clients, e.g. on mobile devices,
and it also makes the network more susceptible to DDoS attacks with false proofs.

Hmm... Ok. Darn. I think my random index jumping would work, but it does require that the verification algorithm has the same amount of memory. I need to go back to the drawing board I guess.
legendary
Activity: 988
Merit: 1108
September 29, 2014, 04:42:27 PM
#19
And the number of iterations must be low enough so that validation of blocks becomes fast enough.

Seems you never heard of asymmetric PoWs, where only proof attempts require a lot of memory,
but verification requires negligible...

But is it really a problem if validation requires an equal amount of memory?

It's a big problem for SPV clients, e.g. on mobile devices,
and it also makes the network more susceptible to DDoS attacks with false proofs.
full member
Activity: 126
Merit: 100
September 29, 2014, 04:34:28 PM
#18
And the number of iterations must be low enough so that validation of blocks becomes fast enough.

Seems you never heard of asymmetric PoWs, where only proof attempts require a lot of memory,
but verification requires negligible...

But is it really a problem if validation requires an equal amount of memory?
legendary
Activity: 988
Merit: 1108
September 29, 2014, 04:25:53 PM
#17
And the number of iterations must be low enough so that validation of blocks becomes fast enough.

Seems you never heard of asymmetric PoWs, where only proof attempts require a lot of memory,
but verification requires negligible...
full member
Activity: 126
Merit: 100
September 29, 2014, 03:44:01 PM
#16
ASICs can calculate hash functions very fast but cannot practically have much memory. CPUs have access to a lot of memory but are slower at calculating hash values.

The PoW algorithm here needs to demand separate memory for each hash function core in the ASICs. Otherwise lots of cores could share a single memory.

By requiring lots of memory accesses, a single shared memory in an ASIC will become a bottleneck.

This can be achieved by having a pre-calculated list of random numbers, large enough that ASICs will have trouble allocating enough memory to each hash function core. Random index jumping in the list is done by taking the initial hash value and hashing it repeatedly. The number of iterations should be large enough that memory access becomes a bottleneck for ASICs with many hash function cores, yet low enough that validation of blocks stays fast.

EDIT: No, not hashing several times. The first index position is h MOD N, and the index jumping is done by XORing with (h / N) MOD N. That is, the lowest part of the hash value gives the first index position, and another part of the hash value drives the random index jumps. A huge number of random jumps can then be done with very little computation yet with loads of memory access. The list of random numbers can remain fixed, and using two parts of the hash value prevents shortcuts. (Unless I have overlooked something; I'm too lazy at the moment to think this through carefully.)
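A minimal Python sketch of one possible reading of this jump scheme. The post leaves the exact jump rule underspecified, so here the table value is XORed with the second part of the hash before reducing MOD N; all names and sizes are illustrative:

```python
import hashlib

def xor_jump(message: bytes, table: list[int], iterations: int) -> int:
    """One possible reading of the XOR index-jumping scheme:
    the low part of the hash picks the first index, a second
    part of the hash drives the random jumps."""
    N = len(table)
    h = int.from_bytes(hashlib.sha256(message).digest(), "big")
    j = h % N            # first index position: h MOD N
    mix = (h // N) % N   # second part of the hash value: (h / N) MOD N
    acc = h
    for _ in range(iterations):
        acc ^= table[j]              # the memory lookup is the real work
        j = (table[j] ^ mix) % N     # jump to the next random index
    return acc

# Tiny demo table; a real deployment would use a huge fixed list.
table = [int.from_bytes(hashlib.sha256(bytes([i])).digest(), "big")
         for i in range(16)]
print(hex(xor_jump(b"block header", table, 100) & 0xffff))
```

Note that on a tiny table this jump rule can fall into a short cycle; the scheme only makes sense with a list large enough that the jumps stay memory-bound.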
legendary
Activity: 988
Merit: 1108
September 29, 2014, 02:58:22 PM
#15
Use the search button..  a lot of people have tried to do this in various ways.. All of them have failed.

They failed because they were computation dominated.

For a PoW that spends a minority of runtime on computation,
and a majority on random memory access latency, commodity DRAM could be the ASIC.
full member
Activity: 126
Merit: 100
September 29, 2014, 02:47:07 PM
#14
Use the search button..  a lot of people have tried to do this in various ways.. All of them have failed.

Every time it goes CPU -> GPU -> FPGA -> ASIC

Yes, that's the trend. Ethereum has a (new) PoW algorithm that could be tricky to build ASICs for, but I read that there are other problems with the algorithm. ASICs can already do scrypt today: https://zeusminer.com/product/zeusminer-volcano-300mhs1000wscrypt-asic-miner/

It's fun to think of new solutions anyway. My proposal needs to be modified to use a fixed list of random values in order for verification to be fast, plus the index jumps need to include the initial hash in every jump, or else a double-lookup table could shortcut the iterations.
legendary
Activity: 1484
Merit: 1026
In Cryptocoins I Trust
September 29, 2014, 02:07:32 PM
#13
Use the search button..  a lot of people have tried to do this in various ways.. All of them have failed.

Every time it goes CPU -> GPU -> FPGA -> ASIC
full member
Activity: 126
Merit: 100
September 29, 2014, 01:37:12 PM
#12
Another solution for RHA is to use a fixed pre-calculated list of values. The first index position is h MOD N, where h is the original hash value of the message (the Bitcoin block). The rest of the algorithm is the same as before, except the RHA value is set to y_M XOR h.

Now verification becomes fast while the algorithm still needs lots of memory.
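The fixed-list variant can be sketched as follows, assuming SHA-256 for h and omitting the collision-avoidance probing from the original RHA description for brevity (function and variable names are illustrative):

```python
import hashlib

def rha_fixed(message: bytes, table: list[int], M: int) -> int:
    """Fixed-list RHA sketch: the table is precomputed once and
    reused for every message; h only selects the starting index."""
    N = len(table)
    h = int.from_bytes(hashlib.sha256(message).digest(), "big")
    y = table[h % N]          # first index position: h MOD N
    for _ in range(M):
        y = table[y % N]      # each step is one random memory lookup
    return y ^ h              # RHA = y_M XOR h

# Precomputed table shared by miners and verifiers (demo size only).
table = [int.from_bytes(hashlib.sha256(i.to_bytes(4, "big")).digest(), "big")
         for i in range(64)]
print(hex(rha_fixed(b"block", table, 1000) & 0xffffffff))
```

Because the table no longer depends on the message, a verifier can keep it resident and check a proof with M lookups, which is what makes this variant fast to verify.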
full member
Activity: 126
Merit: 100
September 29, 2014, 01:22:32 PM
#11
Ouch, SHA-256 is pretty slow on a CPU. Crypto++ 5.6.0 Benchmarks -- http://www.cryptopp.com/benchmarks.html

Way too slow to be used in the RHA I proposed. Instead a linear congruential generator (LCG) can be used. That's a super fast calculation: x_{k+1} = (a * x_k + c) MOD m.

This might be a problem: "They also must not be used for cryptographic applications" -- http://en.wikipedia.org/wiki/Linear_congruential_generator#Advantages_and_disadvantages_of_LCGs

On the other hand, it's not about secure cryptography here. A 256-bit LCG can be used for generating the list of values, except for the first value in the list, which is an ordinary hash value (such as from SHA-256). With an LCG implementation, verification can be done within a second even for very large lists. The problem may be that a shortcut is possible in the calculations when using an LCG.
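For reference, a textbook LCG sketch; the 64-bit constants below are Knuth's MMIX parameters, standing in for the 256-bit parameters imagined here:

```python
def lcg_list(seed: int, n: int,
             a: int = 6364136223846793005,
             c: int = 1442695040888963407,
             m: int = 2**64) -> list[int]:
    """Generate n values of the recurrence x_{k+1} = (a*x_k + c) mod m."""
    values = []
    x = seed
    for _ in range(n):
        x = (a * x + c) % m
        values.append(x)
    return values

print(lcg_list(42, 3))
```

The shortcut worry is well founded: an LCG admits closed-form jump-ahead, since x_{k+n} can be computed from x_k with O(log n) modular multiplications, so a miner would not actually need to store the whole list.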

full member
Activity: 126
Merit: 100
September 29, 2014, 12:49:52 PM
#10
Remember that the block hash must be quickly verifiable. This means that calculating a single hash should be as fast as possible on a CPU. Otherwise only miners would be able to verify blocks.

With the RHA algorithm the miners would still need to do zillions of hash calculations, while verification is only one RHA calculation. Sure, RHA is hugely more computationally demanding than, say, plain SHA-256, but verification can be done within a second on a CPU, I guess.

Or do you mean ordinary Bitcoin nodes need to do many block verifications per second? That could be problematic with RHA.
member
Activity: 64
Merit: 10
September 29, 2014, 12:11:43 PM
#9
Remember that the block hash must be quickly verifiable. This means that calculating a single hash should be as fast as possible on a CPU. Otherwise only miners would be able to verify blocks.
full member
Activity: 126
Merit: 100
September 29, 2014, 08:29:49 AM
#8
I have learned that a proof of work (PoW) that is easy to do with ASICs may actually be a good choice. Anyway, I came to think of a PoW algorithm that requires lots of memory, which makes it difficult to implement in ASICs:

Random Hash Algorithm (RHA)

The algorithm starts by generating a huge list of N hash values, where each new value h_i is the hash of the previous value in the list. Then the following calculation is iterated M times: y_i = h_j, where j = y_{i-1} MOD N and y_0 = h_{N-1}. If a hash value has already been used, then j = (y_{i-1} + n) MOD N for n = 1, 2, 3... until an unused hash value is found. And h_0 is the ordinary hash value (digest) of the message (in the case of Bitcoin, the message is the part of the block to do PoW for).

The resulting hash value is RHA = y_M, with M < N (to allow enough iterations).

What this means is that the last hash value in the list is used to generate a random index into the list, from which the next hash value is taken, and this is iterated M times. Since it's impossible to know beforehand where the random index jumps will land, there are M random positions that have to be examined one by one.

Without a large memory, the list of hash values must be at least partially recalculated for each of the M index jumps. With a large memory, the calculation is simply iterated hash value lookups in the list. Using a cryptographic hash function, such as SHA-256, makes the direct memory lookup approach optimal. So the algorithm is non-optimizable in the theoretical sense, and the optimal implementation requires large amounts of random access memory.
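A direct Python sketch of the description above, assuming SHA-256 as the hash and linear probing for the "already used" rule (names are illustrative, and the demo sizes are tiny compared to what mining would need):

```python
import hashlib

def sha(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def rha(message: bytes, N: int, M: int) -> bytes:
    """RHA sketch: build a chain of N hash values, then perform M
    random index jumps, probing forward past already-used entries."""
    assert M < N
    # h_0 is the ordinary digest of the message; h_i = hash(h_{i-1}).
    table = [sha(message)]
    for _ in range(N - 1):
        table.append(sha(table[-1]))
    used = set()
    y = table[N - 1]                       # y_0 = h_{N-1}
    for _ in range(M):
        j = int.from_bytes(y, "big") % N   # j = y_{i-1} MOD N
        while j in used:                   # j = (y_{i-1} + n) MOD N, n = 1, 2, ...
            j = (j + 1) % N
        used.add(j)
        y = table[j]                       # y_i = h_j
    return y                               # RHA = y_M

print(rha(b"block header", N=256, M=64).hex())
```

Note that because the table is seeded from the message itself, a verifier must rebuild all N entries, which is exactly the verification-memory problem discussed later in the thread.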
full member
Activity: 126
Merit: 100
September 15, 2014, 10:02:25 AM
#7
https://download.wpsoftware.net/bitcoin/asic-faq.pdf

Questions 2.5 and 2.6 cover the obvious problems with this proposal, but almost the entire document is applicable.

I get this part:

"The proof-of-work must be optimization free; that is, there should not be any algorithmic speedups which would give an advantage over the standard algorithm. If a speedup exists and is found, there is strong motivation for the discoverer to use it secretly rather than publishing it, gaining an unfair advantage. This contributes to centralization."

That's a good point. There's a possibility that the integer factorization I propose, or even ordinary prime factorization, could lead to algorithms being developed in secret. Not good. Maybe if I changed it to only use ordinary prime factorization it would be OK, since even though it's probably not optimization free, the problem of prime factorization is well known and there's tons of public research about it available.

This other part I'm more unsure of:

"It is also important that miners with a lot of computational power should not achieve a disproportionate benefit."

Perhaps with integer factorization some miner could become dominant, which would open up a risk of 51% attacks.
full member
Activity: 179
Merit: 151
-
September 15, 2014, 08:35:05 AM
#6
https://download.wpsoftware.net/bitcoin/asic-faq.pdf

Questions 2.5 and 2.6 cover the obvious problems with this proposal, but almost the entire document is applicable.
full member
Activity: 126
Merit: 100
September 15, 2014, 08:06:27 AM
#5
Quote
for a problem like factorization of integers an ASIC wouldn't be that much faster than a CPU

Why do you think so? The fact is that there are no such ASICs today.
But it would not be a big problem to create such a device if it were economically reasonable.

I was thinking that integer factorization would be difficult to compute in parallel and that the clock speed of ASICs is lower than for CPUs.
legendary
Activity: 1260
Merit: 1019
September 15, 2014, 07:54:14 AM
#4
Quote
for a problem like factorization of integers an ASIC wouldn't be that much faster than a CPU

Why do you think so? The fact is that there are no such ASICs today.
But it would not be a big problem to create such a device if it were economically reasonable.
full member
Activity: 126
Merit: 100
September 15, 2014, 07:40:27 AM
#3
Quote
Today ASICs can easily calculate SHA-256 for bitcoin mining. As an alternative proof of work (PoW) the miners can be given the problem of solving integer factorization. This type of PoW can perhaps make CPUs more competitive versus ASICs.

https://en.wikipedia.org/wiki/Application-specific_integrated_circuit
ASIC is an integrated circuit (IC) customized for a particular use, rather than intended for general-purpose use
There are no such things as "ASIC-resistant algorithms"

Either the community uses your fork and some day somebody creates an ASIC for it instead of using a general-purpose computer, or your cryptocurrency is dead.


Yes, everything a CPU can do will eventually be possible to do with ASICs, but my idea is that for a problem like integer factorization an ASIC wouldn't be that much faster than a CPU. And parallel computing can be limited by restricting how a Bitcoin block can be put together.