
Topic: Emergent block time through twist on consensus mechanism [newbie idea] (Read 49 times)

Greetings from the newbie,

I have no background in CS but have relatively recently become interested in cryptocurrencies and blockchains from a technical point of view. As you do when you’re a beginner, you come up with all sorts of new ideas from which experience will dissuade you. I have no experience though, so here we are.

So, here are some thoughts on a potential new consensus mechanism.

First, let's generalize the standard PoW mechanism: suppose that, instead of adopting a common difficulty setting for the entire network, you let every miner solve as complex a puzzle as they wish, as measured by the number of leading zeroes in the block's hash. The rules are modified such that, instead of the longest blockchain always being adopted, all nodes must now go for the blockchain with the largest count of zeroes. The "standard" PoW mechanism is then just a special case of this general setup, with the number of zeroes fixed for everyone at any one time.

To make it a bit less cacophonic, set the actual selection metric to chain length plus the count of zeroes over the 100 most recent blocks.
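For concreteness, a rough Python sketch of that fork-choice metric as I read it (full chain length plus total leading zero bits over the last 100 block hashes; the function names and the 100-block window are just my illustration, not anything standardized):

```python
def leading_zero_bits(block_hash: bytes) -> int:
    """Count leading zero bits in a block hash."""
    bits = 0
    for byte in block_hash:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def chain_score(chain: list[bytes], window: int = 100) -> int:
    """Hypothetical fork-choice metric: total chain length plus the
    sum of leading zero bits over the `window` most recent blocks.
    Nodes would adopt whichever candidate chain scores highest."""
    recent = chain[-window:]
    return len(chain) + sum(leading_zero_bits(h) for h in recent)
```

Each miner is then free to publish at any zero count they like; the score just aggregates whatever work they chose to put in.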
All right, what would happen?

When would a miner decide to stop hashing? For the same compute, the longer they hash, the higher the chance of their block being included in the winning chain, but also the higher the chance of someone else mining two blocks in the meantime (blocks are still produced sequentially, of course).

When deciding when to stop hashing, a miner would weigh the volume of incoming transactions, the chance of their block propagating, and probably other considerations that escape me: in other words, exactly the factors the network itself would otherwise have to consider when fixing the block time.
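Just to make that tradeoff concrete, a toy model (entirely my own assumptions: rival blocks arrive as a Poisson process at some rate, and none of these names come from any real protocol):

```python
import math

def p_rival_mines_at_least(rate: float, t: float, k: int) -> float:
    """P(rivals mine >= k blocks within t seconds), modeling rival
    block arrivals as a Poisson process at `rate` blocks/sec."""
    lam = rate * t
    p_less = sum(math.exp(-lam) * lam**i / math.factorial(i)
                 for i in range(k))
    return 1.0 - p_less

# Toy stopping intuition: hashing another t seconds buys extra
# expected zeroes, but if rivals mine two blocks first, their chain
# may outrun yours on length alone. A miner would stop once this
# risk outweighs the marginal score from more zeroes.
risk = p_rival_mines_at_least(rate=0.002, t=60, k=2)
```

The point is only that the stopping decision becomes a continuous cost-benefit calculation rather than a protocol constant.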

My claim is that such a setup would lead to the emergence of a responsive block time that varies to meet the needs of the network at any given moment.

Further, winning a block would no longer be a function of compute alone, but also of strategy and knowledge of the market.

Yes, finality would be worse per block, but block time would likely be shorter on average, so finality in terms of time may well be better.

For what it's worth, energy use may also improve a bit.

Orphan blocks would explode though, no question.

Anyway, now on to step 2: substitute a Verifiable Delay Function for PoW (à la Solana's Proof of History), where instead of counting zeroes we count cycles. A VDF is far less sensitive to one's rig and requires a fairly stable time to compute, so you know that each of the cycles people are gambling on has taken comparable time.
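As a toy illustration of "counting cycles" (a caveat worth flagging: this iterated-hash chain is essentially what Solana's Proof of History does, but it is not a true VDF in the cryptographic sense, because verifying it takes as long as computing it; real VDF constructions such as Wesolowski's allow fast verification):

```python
import hashlib

def sequential_delay(seed: bytes, cycles: int) -> bytes:
    """Run `cycles` sequential SHA-256 iterations. Each step consumes
    the previous digest, so the work cannot be parallelized: adding
    machines doesn't help, only a faster single core does."""
    h = seed
    for _ in range(cycles):
        h = hashlib.sha256(h).digest()
    return h
```

Under this scheme a miner's "score" would be the cycle count reached, rather than the zeroes found.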

Now it still makes sense to put your best machine in the game, but there's no point in linking machines in parallel, since the VDF does not lend itself to parallelization. Larger miners would likely just run many independent blocks in parallel to try to have one of them win. Even so, I reckon the sensitivity of the chance of winning the next block to compute would fall closer to a linear (fair) relationship: basically, the chance of success would be proportional to the count of one's machines above a certain capacity threshold (you still need enough power to verify blocks; you can't put your phone on it).

Anyway, I think this setup would make a 51% attack more challenging, between requiring the attacker to actually spend half the network's total time to have a chance of success, and success still being far from guaranteed, given the (likely) probabilistic link between number of machines and chance of success. Even if I control 51% of the machines in the system, the chance of having my block end up as part of the final chain would still be just 51%, I think.
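To put a number on that last claim (again a toy model, resting on my own assumption that each block is an independent lottery won with probability equal to the attacker's machine share):

```python
def p_attacker_streak(share: float, k: int) -> float:
    """Chance the attacker wins k consecutive blocks, when each block
    independently goes to the attacker with probability `share`.
    Rewriting a k-deep chain requires roughly such a streak."""
    return share ** k

# With 51% of the machines, winning any single block is a coin flip
# slightly in the attacker's favor, but a 6-block streak needed for
# a deep rewrite happens less than 2% of the time.
deep_rewrite = p_attacker_streak(0.51, 6)
```

So majority hardware would buy a majority of blocks, but not reliable control over any particular stretch of the chain.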

Energy use may fall a bit further, but meh.

So that’s it. Tear it to pieces.

Regards,

