Author

Topic: If you just SUM the inverse-hashes of valid blocks ..? (Read 213 times)

legendary
Activity: 1456
Merit: 1176
Always remember the cause!
I think the idea is based on a misconception that rarer values are somehow worth more. I don't believe that is true because the amount of work expended to generate a hash is the same, regardless of the hash.

I believe that the result of such a system would simply make the actual difficulty higher than the stated difficulty, but by a varying amount. I can't think of any benefit of adding a random factor to the difficulty that would outweigh the problems.
Agreed. No matter what the outcome is, you are still hashing against the target difficulty, and in the long run you hit blocks in proportion to your hashpower; no hashpower is lost, contrary to what the OP suggests.
hero member
Activity: 718
Merit: 545
P2Pool uses a system a little bit like this.

In P2Pool, you search for a share roughly every 10 seconds. You publish it at that lower difficulty, on the P2Pool network. BUT if you find a share that is 60x harder (one every 10 minutes), hard enough to be a valid Bitcoin block, you then publish that on the mainnet. The cumulative POW is preserved - in a smaller number of blocks.

In this scenario, multi-difficulty blocks have a great use case.

I am trying to leverage this.

In Bitcoin, you search for a block every 10 minutes. You publish it at that difficulty level, on the mainnet. BUT if you find a block that is 60x harder (one every 10 hours), hard enough to be a valid Super Block (let's say), then you can publish that on a Super Block Network, where you only find one block for every 60 normal Bitcoin blocks. The cumulative POW is preserved - in a smaller number of blocks.
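
As a rough sketch of the bookkeeping in these two cases (the 60x ratios are the illustrative figures used above, not protocol constants, and the names are made up):

Code:
# Rough sketch of the work bookkeeping described above. The 60x ratios are the
# illustrative figures from this post, not protocol constants.
SHARE_DIFF = 1                    # P2Pool share: roughly one every 10 seconds here
BLOCK_DIFF = 60 * SHARE_DIFF      # Bitcoin block: roughly one every 10 minutes
SUPER_DIFF = 60 * BLOCK_DIFF      # hypothetical "Super Block": one every ~10 hours

# The same expected amount of hashing can be expressed at any difficulty level,
# just in fewer, harder blocks -- the cumulative POW is preserved.
assert 60 * SHARE_DIFF == BLOCK_DIFF
assert 60 * BLOCK_DIFF == SUPER_DIFF
assert 3600 * SHARE_DIFF == SUPER_DIFF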

The point is: since this works for dual-difficulty blocks, what happens if you take it to the extreme and simply use the inverse-hash as the block weight?

It STILL seems to work (the chain with the most POW wins), but the fluctuations are far bigger..

I'm trying to see if those fluctuations can be better controlled.
legendary
Activity: 4522
Merit: 3426
I think the idea is based on a misconception that rarer values are somehow worth more. I don't believe that is true because the amount of work expended to generate a hash is the same, regardless of the hash.

I believe that the result of such a system would simply make the actual difficulty higher than the stated difficulty, but by a varying amount. I can't think of any benefit of adding a random factor to the difficulty that would outweigh the problems.
legendary
Activity: 3430
Merit: 3080
Oh I get it now, tromp was saying "if that were true", explaining his edit
legendary
Activity: 3122
Merit: 2178
Playgram - The Telegram Casino
Sometimes miners mine blocks with a hash quite a lot lower than the target value

And that is exactly the problem, since such unexpectedly low hashes will then cause multiple blocks to get orphaned. There will be a permanent state of uncertainty about finalization of recent transactions.
6 confirmations will no longer be particularly safe.

This has no basis in fact.

All blocks must have a hash lower than the threshold; there is no logic in Bitcoin block validation that behaves in any way differently depending on how low a new block's hash value is. Either a block is the lowest hash value for the next block solution, or it's not. "Unexpectedly low" is therefore completely meaningless. The only logic that exists in Bitcoin block validation is "lowest", not "how low".

[...]

I'm not sure you read tromp's post correctly (well, or maybe I didn't), but the way I understand it is this:

1) Bitcoin follows the chain with the most cumulative work, based on the difficulty at which each block was mined

2) spartacus rex suggests calculating the cumulative work based not on the difficulty (which is the same for each block within a given difficulty period) but rather on the amount of work that is put into a block beyond this difficulty (ie. how far the hash is beyond the difficulty threshold)

3) tromp then points out that this would make the logic for following the longest cumulative work less stable, as a single "hard" block (ie. far beyond the difficulty threshold) would supersede multiple "easy" blocks (ie. within, but close to, the difficulty threshold). This would lead to both (a) more orphans and (b) some confirmations being worth more than others (ie. 6 confirmations by "easy" blocks would be nullified by a single "hard" block), making it hard to reliably assess transaction finality.

While I wouldn't go as far as claiming that this makes selfish mining easier, pointing out that block equality within a given difficulty period is important for a stable network, and thus for network security, seems to me both correct and important. Feel free to correct me though.
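
To put a rough number on point 3, here is a minimal Monte-Carlo sketch. It assumes one particular reading of the proposed weight (the work implied by the hash actually found, i.e. MAX_HASH // hash) and an illustrative target; that reading and all the names are assumptions, not quotes from the thread.

Code:
import random

MAX_HASH = 2**256
TARGET = MAX_HASH // 1_000_000        # illustrative difficulty, not mainnet's

def mine_block():
    """A valid block's hash is (roughly) uniform below the target."""
    return random.randrange(TARGET)

def implied_work(block_hash):
    """Block weight under the 'how far below the target did you get' reading."""
    return MAX_HASH // (block_hash + 1)

def lucky_block_wins(confirmations=6):
    """Can a single lucky block outweigh `confirmations` ordinary blocks?"""
    honest = sum(implied_work(mine_block()) for _ in range(confirmations))
    return implied_work(mine_block()) > honest

trials = 100_000
wins = sum(lucky_block_wins() for _ in range(trials))
print(f"one block outweighed 6 confirmations in {wins / trials:.1%} of trials")
# Under the cumulative-difficulty rule one block can never outweigh six blocks
# of equal difficulty; under hash-weighting it happens a non-trivial fraction
# of the time, which is the orphan / finality concern above.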
full member
Activity: 351
Merit: 134
Let's remove this "problem" you've identified lol, only the longest chain wins. We then have a "new" problem; how to resolve chain forks when 2 blocks are found before either block has 100% acceptance, and so different parts of the network accept either block. This problem was solved in Bitcoin back in 2009 or 2010, and now you're saying that the solution is causing the problem. Those who merited your post should ask for the merit points to be returned.

I don't think you've read this thread correctly. No one is suggesting we remove the cumulative-difficulty rule; the OP was suggesting that hashes with a numerical value well below the target are, on average, harder to mine than hashes just under the target, so there might be some merit in using this to order blocks.

@tromp rightly pointed out that this would cause wild reorgs whenever a miner gets lucky with a very low hash, thereby increasing double-spend risk.
legendary
Activity: 3430
Merit: 3080
Sometimes miners mine blocks with a hash quite a lot lower than the target value

And that is exactly the problem, since such unexpectedly low hashes will then cause multiple blocks to get orphaned. There will be a permanent state of uncertainty about finalization of recent transactions.
6 confirmations will no longer be particularly safe.

This has no basis in fact.

All blocks must have a hash lower than the threshold; there is no logic in Bitcoin block validation that behaves in any way differently depending on how low a new block's hash value is. Either a block is the lowest hash value for the next block solution, or it's not. "Unexpectedly low" is therefore completely meaningless. The only logic that exists in Bitcoin block validation is "lowest", not "how low".


Let's remove this "problem" you've identified lol, only the longest chain wins. We then have a "new" problem; how to resolve chain forks when 2 blocks are found before either block has 100% acceptance, and so different parts of the network accept either block. This problem was solved in Bitcoin back in 2009 or 2010, and now you're saying that the solution is causing the problem. Those who merited your post should ask for the merit points to be returned.


For the same reason, this will make selfish mining all the more effective.

But you were wrong, so it actually makes zero difference

Edit: I misinterpreted, sorry tromp
hero member
Activity: 718
Merit: 545
Sometimes miners mine blocks with a hash quite a lot lower than the target value

And that is exactly the problem, since such unexpectedly low hashes will then cause multiple blocks to get orphaned. There will be a permanent state of uncertainty about finalization of recent transactions.
6 confirmations will no longer be particularly safe.

For the same reason, this will make selfish mining all the more effective.

So the Longest Chain Rule, where one next block is as good as any other, is quite essential in stabilizing the transaction history.

Yes - this is the issue: one lone hash can wipe out many normal/smaller hashes.

I have a system where you delay this. So initially a block is worth its normal difficulty - as per usual.

But after 10,000 blocks - well beyond any re-org - you let it be worth what it's actually worth. This has some nice benefits (in my case, for pruning).
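
A minimal sketch of that delayed-weighting rule, assuming the inverse-hash definition from the opening post; using MAX_HASH - target as the "normal difficulty" weight (so both cases are in the same units) and all the names are assumptions, not an actual implementation:

Code:
MAX_HASH = 2**256 - 1        # maximum possible 256-bit hash value
REORG_HORIZON = 10_000       # depth beyond which a re-org is considered implausible

def block_weight(target, block_hash, depth_from_tip):
    """Weight a block contributes to its chain's total proof-of-work."""
    if depth_from_tip < REORG_HORIZON:
        # Recent blocks: every block that met the target counts the same,
        # exactly like the ordinary cumulative-difficulty rule.
        return MAX_HASH - target
    # Deeply buried blocks: credit the block with what it is "actually worth",
    # i.e. its inverse-hash.
    return MAX_HASH - block_hash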
legendary
Activity: 990
Merit: 1108
Sometimes miners mine blocks with a hash quite a lot lower than the target value

And that is exactly the problem with this proposal, since such unexpectedly low hashes will then cause multiple blocks to get orphaned. There will be a permanent state of uncertainty about finalization of recent transactions. 6 confirmations will no longer be particularly safe.

For the same reason, this will make selfish mining all the more effective.

So the Longest Chain Rule, where one next block is as good as any other, is quite essential in stabilizing the transaction history.
full member
Activity: 351
Merit: 134
For anyone wondering what he's asking, here's some clarification:

He's talking about what would happen if you changed the LCR from sorting branches by cumulative difficulty to sorting by the lowest achieved hash vs the target value. Sometimes miners mine blocks with a hash quite a lot lower than the target value, and on average these numerically lower hashes are harder to mine, so does it increase the security of the chain to sort by them?
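
In code, the re-ordering being asked about might look roughly like this sketch (the branch representation and the names are illustrative only, not anything that exists in Bitcoin):

Code:
from fractions import Fraction

def best_branch(branches):
    """branches: iterable of (branch_id, lowest_hash_seen, target) tuples.
    Prefer the branch whose best (lowest) hash is furthest below its target."""
    # Fraction avoids float rounding on 256-bit integers.
    return min(branches, key=lambda b: Fraction(b[1], b[2]))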
hero member
Activity: 718
Merit: 545
..I'm working on something.

The inverse-hash is MAX_HASH_VALUE minus the hash value. So the lower the hash of the block, the more difficult the block, and the more you add to the total.

I'm wondering what the sum of the inverse-hashes of the blocks, as opposed to the sum of the difficulties of the blocks, would give you?

You would still need to find a nonce that satisfies the block difficulty, as usual, but after that the lower your hash, the more your block is worth.

The chain with the most hash-rate would on average still have the highest sum of inverse-hashes.

Any ideas what would happen?

(It just seems that some of the available POW is being left unused...)
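
For reference, a literal sketch of that weighting (it assumes each block has already passed the normal target check; the names are illustrative only):

Code:
MAX_HASH = 2**256 - 1          # MAX_HASH_VALUE for a 256-bit hash

def inverse_hash(block_hash):
    """The lower the block's hash, the more it adds to the chain total."""
    return MAX_HASH - block_hash

def chain_weight(block_hashes):
    """Proposed alternative to summing each block's difficulty."""
    return sum(inverse_hash(h) for h in block_hashes)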