
Topic: Idea for ASIC resistance (Read 1477 times)

donator
Activity: 1736
Merit: 1014
Let's talk governance, lipstick, and pigs.
November 21, 2014, 01:24:59 AM
#18
I saw it as Intel and AMD vs Motorola and IBM. Even Apple switched to Intel eventually because CISC had heat problems with increased cycles. Non-ASIC SHA mining generates more heat for the same reason. Eventually someone would build a single chip to increase efficiency and we'd be back to ASIC mining.

x86 is the CISC architecture; PowerPC and DEC Alpha are the RISC architectures. CISC has become king.
OK, I see this is a more complex issue due to instruction set vs. core, and that's why it's not a good analogy.
Ix
full member
Activity: 218
Merit: 128
November 21, 2014, 12:49:16 AM
#17
I saw it as Intel and AMD vs Motorola and IBM. Even Apple switched to Intel eventually because CISC had heat problems with increased cycles. Non-ASIC SHA mining generates more heat for the same reason. Eventually someone would build a single chip to increase efficiency and we'd be back to ASIC mining.

x86 is the CISC architecture; PowerPC and DEC Alpha are the RISC architectures. CISC has become king.
donator
Activity: 1736
Merit: 1014
Let's talk governance, lipstick, and pigs.
November 21, 2014, 12:00:22 AM
#16
In the end, the consensus is likely to be a simplified processor that trades functionality for speed. Speed and an open architecture win.

Yet the opposite occurred in the RISC vs. CISC debate; odd that you should use that example. Grin Although that had a lot more to do with the installed base than with the architecture itself.
I saw it as Intel and AMD vs Motorola and IBM. Even Apple switched to Intel eventually because CISC had heat problems with increased cycles. Non-ASIC SHA mining generates more heat for the same reason. Eventually someone would build a single chip to increase efficiency and we'd be back to ASIC mining.
legendary
Activity: 1232
Merit: 1011
Monero Evangelist
November 20, 2014, 11:52:36 PM
#15
Wouldn't the proposed solution (a new hashing algorithm every block) increase the probability of chain forks, due to networking issues/orphaned blocks?
Ix
full member
Activity: 218
Merit: 128
November 20, 2014, 11:37:14 PM
#14
In the end, the consensus is likely to be a simplified processor that trades functionality for speed. Speed and an open architecture win.

Yet the opposite occurred in the RISC vs. CISC debate; odd that you should use that example. Grin Although that had a lot more to do with the installed base than with the architecture itself.
donator
Activity: 1736
Merit: 1014
Let's talk governance, lipstick, and pigs.
November 20, 2014, 12:28:31 PM
#13
The ASIC debate reminds me of CISC vs. RISC CPUs. It basically depends on the external architectures built around them and whether they are open or closed. In the end, the consensus is likely to be a simplified processor that trades functionality for speed. Speed and an open architecture win.
staff
Activity: 4284
Merit: 8808
November 20, 2014, 12:21:17 PM
#12
Suppose a coin switches its hashing algorithm each time a new block is found. Also suppose that new algorithm itself is randomly generated, and the previous block contains the instructions on how to perform it.
Bitcoin is a cryptosystem. Like other cryptosystems the details matter _greatly_.  You say "random" ... well what the heck does that mean?  Does the randomness fairy bless you with a magical number by striking your brain with a cosmic ray? Everyone needs to agree on what the state is, so presumably not.

Perhaps we should have a tradition here where anything unspecified in a proposal can be filled in whatever way the person responding wants, instead of giving them the responsibility of reading the tea leaves and trying to extract (or prove the non-existence of) a single secure proposal out of the infinite class of proposals your underspecified message invoked. With that kind of tradition I could just analyze your proposal assuming random means that You, Muis, "randomly" pick the hash functions, and broadcast signed messages... allowing you to make it easy for yourself to mine, and also allowing you to partition the network by announcing conflicting hash functions. Tongue

Perhaps not?

I'd guess your post really means not-at-all-randomly but based on the prior block hash, since that's the thing people most commonly mistake for "random" in these sorts of systems. If so, this would mean that an attacker could grind his current block to make sure he comes up with an algorithm which has weaknesses he knows how to exploit, or which is especially fast on his hardware. This could be a pretty extreme vulnerability. And, of course, you can't block hardware ... or else a regular computer couldn't verify it either, since computers are hardware too; all you could hope to do is limit the domain for hardware optimization, but you haven't suggested anything specific about your parameterization which makes it clear that it would actually achieve that. E.g. people would just make hardware specialized to the space of functions the 'random' generation can produce (or a subset, and grind blocks to get it into the state they can support). You could perhaps try to structure the circuit so that the 'randomness' can't introduce strong optimizations, though that would seem to be at odds with making hardware optimization of the base design hard.
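
To make the grinding concern concrete, here's a toy sketch (my own illustration; the selection rule, the family count, and the "favored" set are all made up, and it ignores the difficulty target):

Code:
import hashlib
import os

# Toy illustration only: assume the *next* block's algorithm is selected by
# the hash of the *current* block, reduced to one of NUM_FAMILIES function
# families.  NUM_FAMILIES and FAVORED are made-up parameters.
NUM_FAMILIES = 1024
FAVORED = set(range(32))   # families the attacker's hardware happens to handle well

def next_algorithm_family(block_hash: bytes) -> int:
    """Family the next block would be forced to use under this toy rule."""
    return int.from_bytes(block_hash[:4], "little") % NUM_FAMILIES

def grind_for_favored_family(block_template: bytes, max_tries: int = 1_000_000):
    """Vary an extra nonce until the block's hash steers the next algorithm
    into the attacker's favored set.  In practice the attacker just folds this
    check into normal mining, so the extra cost is only about
    NUM_FAMILIES / len(FAVORED) candidates per block on average."""
    for extranonce in range(max_tries):
        candidate = block_template + extranonce.to_bytes(8, "little")
        block_hash = hashlib.sha256(hashlib.sha256(candidate).digest()).digest()
        if next_algorithm_family(block_hash) in FAVORED:
            return extranonce, block_hash
    return None

# With 32 favored families out of 1024, roughly 1 in 32 candidates qualifies.
print(grind_for_favored_family(os.urandom(80)))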

Then, after that, how do you propose to deal with the incomparability of the computational complexity of different functions? You're given two chains and need to decide which has the most work... one has more hash-function runs, but maybe it's worth less because those functions were easier to execute.
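
A toy illustration of why that comparison is hard (all numbers made up): a naive "sum of difficulty" rule can end up preferring the chain that was physically cheaper to produce.

Code:
# Made-up numbers, purely to illustrate the comparability problem.
# Chain A: 1000 blocks whose generated functions happen to run 10x faster
# than some baseline.  Chain B: 900 blocks of baseline-speed functions.
DIFFICULTY = 2 ** 32                    # expected function evaluations per block
BASELINE_EVALS_PER_JOULE = 1e6          # arbitrary baseline hardware efficiency

def naive_work(blocks: int) -> float:
    """What a 'sum of difficulty' rule sees."""
    return blocks * DIFFICULTY

def energy_cost(blocks: int, speedup: float) -> float:
    """Rough physical cost of producing the chain, in joules."""
    return blocks * DIFFICULTY / (BASELINE_EVALS_PER_JOULE * speedup)

print(naive_work(1000), naive_work(900))               # the naive rule prefers chain A...
print(energy_cost(1000, 10.0), energy_cost(900, 1.0))  # ...but chain B cost ~9x more to build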

In any case, as others have pointed out... it's far from obvious that a meaningful improvement here is possible, that ASICs are harmful, or that it's possible to resist improved hardware. Keep in mind that hardware which is only a few times more power efficient will eventually push everyone else out, since mining is in near-perfect competition... and the increased startup cost for increasingly sophisticated hardware creates its own centralization risks.

I suggest meditating on https://download.wpsoftware.net/bitcoin/asic-faq.pdf some more. Smiley
newbie
Activity: 49
Merit: 0
November 20, 2014, 11:47:58 AM
#11
ASICs are good.

People think ASICs are bad because they were not, or chose not to be, early adopters.
member
Activity: 101
Merit: 10
November 20, 2014, 11:27:10 AM
#10
I think Charles Hoskinson was working on a similar project dubbed "Revolver" when he was still working on the Ethereum Project. The problem is really that the function could end up being unminable, producing too many hash collisions, or having too predictable mutations.
legendary
Activity: 1400
Merit: 1013
November 20, 2014, 10:54:23 AM
#9
If it were possible to achieve ASIC resistance, it would hurt the security model for Bitcoin rather than helping it.
legendary
Activity: 2128
Merit: 1073
November 20, 2014, 10:45:34 AM
#8
But even if that analysis could be automated, it becomes a PoW in itself.
Yeah, but the "fitness testing" would take way more than 10 minutes. http://en.wikipedia.org/wiki/John_Koza has run this type of experiment (not in cryptocurrency, though).

By itself, Bitcoin's "chain with the highest sum of proof-of-work" rule will tend to favor the fastest functions, not the best hash functions. Somebody with a heavily optimizing compiler, a SAT solver, or a similar optimization tool could then exploit coins that don't extensively test the evolved hash functions.
hero member
Activity: 543
Merit: 501
November 20, 2014, 10:28:26 AM
#7
Block withholding attacks are generally ineffective on Bitcoin, because withholding a block is pretty expensive unless you have a majority of the hashing power.

A pool, however, may actually have a majority of the hashing power under a certain special subset of algorithms, which means that block-withholding attacks on a mutating-algorithm blockchain could be substantially more effective. If for 1% of the blocks you control 95% of the hashing power, you can really leverage that and try to grind blocks.
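
Rough numbers to illustrate that leverage (the 10% "ordinary" share below is just an assumption for the example):

Code:
# Back-of-the-envelope on the leverage described above (illustrative only).
SPECIALTY_SHARE = 0.95   # pool's share of effective hashrate on its specialty algorithms
SPECIALTY_FREQ  = 0.01   # fraction of blocks that use those algorithms
ORDINARY_SHARE  = 0.10   # pool's share on every other algorithm (assumed)

# Averaged over all blocks, the pool barely gains...
expected_share = SPECIALTY_FREQ * SPECIALTY_SHARE + (1 - SPECIALTY_FREQ) * ORDINARY_SHARE
print(expected_share)                           # ~0.1085

# ...but during a specialty block it out-mines the rest of the network ~19:1,
# so it can afford to withhold candidates and grind them (e.g. to steer the
# next algorithm back into its specialty set) and still expect to win.
print(SPECIALTY_SHARE / (1 - SPECIALTY_SHARE))  # ~19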

Additionally, you'd need some way to compare the amount of work between chains. If chain A has 500 blocks using 500 different algorithms, how does that compare to chain B, which has 1000 blocks using 1000 different algorithms? Is there a consistent way to measure the amount of work in each chain?
member
Activity: 114
Merit: 12
November 20, 2014, 10:18:20 AM
#6


If the choice of algorithm depends on the hash of the previous block, how could anyone steer the next function?

A large pool/ASIC group could grind blocks until they get ones that steer it toward something they like.

Say a pool is good at hashing algorithm X. The chain is currently at Y, which is close to X, giving them an advantage. They decide to release blocks only when doing so moves the algorithm closer to X. Once X is reached, they own the dominant hashing power by manufacturing fiat.

But also, as mentioned before, you're going to get naturally bad PoW algorithms that don't scale correctly, take too long to verify, etc. Even without "malicious" actors you could well get crazy behavior.
newbie
Activity: 43
Merit: 0
November 20, 2014, 10:05:00 AM
#5

The problem with true random changes to the algorithm is that most of the resultant mutated algorithms have some deadly fault that makes them unsuitable as a proof of work.


I was thinking about that too, but maybe it's not a big deal that 'shortcuts' will sometimes exist?

Most weaknesses in algorithms are discovered by humans, and if a human can find a flaw within the 10-minute block time, that's no problem at all: human PoW beats every other distribution scheme, so they deserve the block reward.

But even if that analysis could be automated, it becomes a PoW in itself. Miners still have to choose how much CPU power (and time) they spend on analyzing the new function first, while all the 'dumb' miners have already started mining that block. Maybe that would cause some kind of race between the two, and as long as that race lasts 10 minutes on average, all will be fine.


Also, it may lead to people trying to steer the next function toward ones they can compute better than the competition.

If the choice of algorithm depends on the hash of the previous block, how could anyone steer the next function?
legendary
Activity: 2128
Merit: 1073
November 20, 2014, 09:26:56 AM
#4
new algorithm itself is randomly generated
The problem with true random changes to the algorithm is that most of the resultant mutated algorithms have some deadly fault that makes them unsuitable as a proof of work. You'll have to incorporate the ideas of http://en.wikipedia.org/wiki/Genetic_programming and constantly monitor and test the fitness of the mutated algorithms as properly working hash functions. Otherwise your algorithm evolution will turn into a cancer that produces exploitable hash functions.
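
For a flavor of what even a bare-minimum fitness test would look like (a sketch only; candidate_hash stands in for whatever the mutation step produces, and passing these checks proves essentially nothing about real cryptographic strength, which is exactly the danger):

Code:
import hashlib
import os
import random

def naive_fitness_test(candidate_hash, samples: int = 200) -> bool:
    """Crude sanity checks for a mutated candidate PoW function
    (candidate_hash: bytes -> 32-byte digest).  Checks output size, obvious
    collisions on random inputs, and rough avalanche behaviour."""
    seen = set()
    flipped_total = 0
    for _ in range(samples):
        msg = bytearray(os.urandom(64))
        digest = candidate_hash(bytes(msg))
        if len(digest) != 32:
            return False                 # wrong output size: unusable as-is
        if digest in seen:
            return False                 # collision on random inputs: hopeless
        seen.add(digest)

        # Avalanche check: flip one input bit, expect ~half the output bits to flip.
        bit = random.randrange(len(msg) * 8)
        msg[bit // 8] ^= 1 << (bit % 8)
        diff = int.from_bytes(digest, "big") ^ int.from_bytes(candidate_hash(bytes(msg)), "big")
        flipped_total += bin(diff).count("1")

    return 100 <= flipped_total / samples <= 156   # ~128 +/- tolerance for a 256-bit output

# Example: SHA-256 passes these checks -- and so do plenty of broken designs.
print(naive_fitness_test(lambda m: hashlib.sha256(m).digest()))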
member
Activity: 114
Merit: 12
November 20, 2014, 09:24:02 AM
#3
There have been similar proposals to do this before, but it isn't immediately clear that it's wanted, or that it would provably stop ASICs.

Also, it may lead to people trying to steer the next function toward ones they can compute better than the competition.

legendary
Activity: 1260
Merit: 1019
November 20, 2014, 09:22:00 AM
#2
Quote
Would that coin be ASIC-proof?
Why should a coin be ASIC-resistant?
newbie
Activity: 43
Merit: 0
November 20, 2014, 08:36:26 AM
#1
Suppose a coin switches its hashing algorithm each time a new block is found. Also suppose that new algorithm itself is randomly generated, and the previous block contains the instructions on how to perform it.

Would that coin be ASIC-proof?

I understand that if there are X different algorithms, miners can still buy X different types of ASICs (one for each algorithm). But if X is large, or even unlimited, wouldn't it be completely infeasible to build specialized equipment?
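
To make it a bit more concrete, here is a rough sketch of the kind of thing I have in mind (purely illustrative, not a spec; the operation set and parameters are placeholders): the previous block hash seeds a deterministic list of simple mixing steps, so every node derives the same 'new algorithm' for the next block.

Code:
import hashlib
import struct

# Rough sketch only: expand the previous block hash into a fixed-length list
# of simple mixing steps.  The operations and parameters are placeholders.
OPS = ("xor_const", "add_const", "rotl", "swap_halves")

def derive_algorithm(prev_block_hash: bytes, num_steps: int = 16):
    """Deterministically turn the previous block hash into (op, param) steps,
    so every node derives the same 'new algorithm' for the next block."""
    steps, counter = [], 0
    while len(steps) < num_steps:
        digest = hashlib.sha256(prev_block_hash + struct.pack("<I", counter)).digest()
        counter += 1
        for i in range(0, len(digest) - 4, 5):
            if len(steps) == num_steps:
                break
            op = OPS[digest[i] % len(OPS)]
            param = struct.unpack("<I", digest[i + 1:i + 5])[0]
            steps.append((op, param))
    return steps

def block_pow_hash(steps, header: bytes) -> bytes:
    """Apply the derived steps to a 32-bit state, then finish with SHA-256."""
    state = int.from_bytes(hashlib.sha256(header).digest()[:4], "little")
    for op, param in steps:
        if op == "xor_const":
            state ^= param
        elif op == "add_const":
            state = (state + param) & 0xFFFFFFFF
        elif op == "rotl":
            r = param % 32
            state = ((state << r) | (state >> (32 - r))) & 0xFFFFFFFF
        else:                       # "swap_halves"
            state = ((state & 0xFFFF) << 16) | (state >> 16)
    return hashlib.sha256(header + state.to_bytes(4, "little")).digest()

# Example: derive next block's steps from a previous block hash and hash a header.
steps = derive_algorithm(hashlib.sha256(b"previous block").digest())
print(block_pow_hash(steps, b"candidate header").hex())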