Author

Topic: Can't we avoid reorgs once and for all? (Read 260 times)

legendary
Activity: 2268
Merit: 18748
January 03, 2023, 03:17:48 PM
#32
We are still working at finding a block at H+1, but there has likely been more energy used than needed.
On average, I don't think there has.

The difficulty is always the same (for this difficulty period). The difficulty does not change based on the previous block's hash. It does not matter if every miner is building on top of the same block (as usually happens), and it would not matter if literally every miner in the world were trying to build on top of a block unique to them. The previous block hash makes absolutely no difference to the difficulty. It will (on average) require the exact same number of hashes and the exact same amount of energy to find the next block, regardless of the presence of a chain split.
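This can be sanity-checked with a toy Monte Carlo (the per-hash success probability below is an illustrative stand-in for real difficulty, not real numbers): whether the hashpower is unified on one tip or split across two, the expected number of hashes to the next block is the same 1/p.

```python
import random

def mean_hashes_to_block(p, split_groups, trials=5000, seed=1):
    """Average total hashes the whole network expends until the next
    block appears, with hashpower divided among `split_groups` tips.
    Each hash succeeds with probability p, set by difficulty alone --
    the parent block a miner chose is irrelevant to the odds."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        hashes = 0
        found = False
        while not found:
            # One "round": each group attempts one hash on its own tip.
            for _ in range(split_groups):
                hashes += 1
                if rng.random() < p:
                    found = True
                    break
        total += hashes
    return total / trials

p = 0.01  # toy per-hash success probability
one_tip = mean_hashes_to_block(p, split_groups=1)   # no chain split
two_tips = mean_hashes_to_block(p, split_groups=2)  # half on each tip
# Both come out near 1/p = 100: the split changes nothing on average.
```

Either way the count of hashes to the next block is geometric with mean 1/p, which is the whole point of the paragraph above.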

Now when the next block is found, you could say that all the energy spent mining on top of the now stale block was not needed. But you could equally say that all the energy spent mining by the pools which did not find the next block was not needed.
legendary
Activity: 1512
Merit: 7340
Farewell, Leo
January 03, 2023, 02:53:30 PM
#31
That exactly means it isn't waste
Some people got paid while others didn't, even though both incurred a cost. The former are the miners; the latter is the pool administrator. For the former it wasn't waste, but for the latter it was.

The difficulty between both chains is the same, not different.
It's the same value, but not the same in the sense that the chains are different. Anyway, that's just semantics; we both mean the same thing.

2) If there are multiple new blocks found nearly simultaneously and both are being mined upon, are the attempts on top of the losing block wasted?

2) should be answered in the negative, since even with two tips at height h (and thus an undetermined height h winner), all the hashpower is still working on finding a block at height h+1, and the expected time to find one is no longer than if there is a unique tip at height h.
We are still working at finding a block at H+1, but there has likely been more energy used than needed.
legendary
Activity: 2268
Merit: 18748
January 03, 2023, 02:23:53 PM
#30
That doesn't mean it isn't waste.
Waste from the point of view of a mining pool is very different to waste from the point of view of the network. I'm sure mining pools probably do see stale blocks as waste. Every hash which does not earn a mining pool money is waste, regardless of whether it is because of a stale block or just the 99.999...% of hashes which are unsuccessful. But those hashes are not wasted from the network's point of view.

And as garlonicon pointed out, this is different again when considering miners instead of mining pools, since miners earn money for unsuccessful hashes too.

Stale blocks create a situation where we have two chains, therefore two potential difficulties.
The difficulty between both chains is the same, not different. And the combined hash rate across both chains for that difficulty will still mean the next block is found in 10 minutes (give or take the usual caveats).

tromp has put this very well I think. Despite the split at height H, all the hash rate on both sides of the split is working on the block at height H+1, just as it would be if there was no split.

But I would answer 1) in the positive...
In the situation you give in point 1), then yes, if a miner is attempting to mine on a chain which is not the main chain (as they would if they were not aware of the latest block), then that work is wasted. But in a chain split as being discussed here, we don't know which chain is the main chain yet, and so the work of both chains contributes to the security of the network.
copper member
Activity: 821
Merit: 1992
January 02, 2023, 05:43:15 PM
#29
Quote
That doesn't mean it isn't waste.
That exactly means it isn't waste, if you define "waste" as "not being paid for producing hashes". You have one lucky miner that submits a share where the whole block is valid, and 6.25 BTC plus fees are collected. And you have a lot of miners who each submitted shares that were below only the share target, stale, or skipped one way or another. But how are the rewards actually split? This lucky miner is not getting 6.25 BTC plus fees. This miner just provided the reward, which is split between all miners, and this lucky miner could for example get only 0.01 BTC, because 6.24 BTC plus fees were split between the other miners, despite none of them mining any accepted block.

Quote
The administrator of a pool decides to do various things to have the business running. An attractive policy is to pay miners for blocks, whether these are stale or not.
The whole reason why pools are needed at all is that miners cannot collect 0.01 BTC on-chain for partially mining a valid 7 BTC coinbase (6.25 BTC subsidy + 0.75 BTC in fees) at 700 times lower difficulty. They have to use pools, because that is not something the mining protocol supports directly.

So, if you think that another method of splitting rewards is better, then you can simply introduce that by forming your own pool. You can collect hashes, calculate chainwork for them, count the total chainwork in your pool, and then split rewards, based on that. There are many methods of rewarding miners. If you think that anything is "wasted", and your "chainwork-based method" is better, then call it Pay-Per-Chainwork (PPC? PPCW?), and form a pool.
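A minimal sketch of what such a proportional split might look like (the function name and the dust policy are made up for illustration; the figures just reproduce the 0.01 BTC / 6.24 BTC example from above):

```python
def split_reward_by_work(share_work, reward_sats):
    """Hypothetical 'pay-per-chainwork' payout: each miner is paid in
    proportion to the work their submitted shares represent, whether
    or not any of their shares happened to be the winning block.

    share_work: dict of miner -> work contributed (e.g. summed share
    difficulty); reward_sats: coinbase value in satoshis."""
    total = sum(share_work.values())
    payouts = {m: reward_sats * w // total for m, w in share_work.items()}
    # Integer division leaves a little dust; hand it to the first
    # miner listed (an arbitrary pool policy, not a protocol rule).
    first = next(iter(share_work))
    payouts[first] += reward_sats - sum(payouts.values())
    return payouts

# The lucky miner did 1/625 of the pool's work, so out of a 6.25 BTC
# coinbase they receive 0.01 BTC and everyone else shares 6.24 BTC:
payouts = split_reward_by_work({"lucky": 1, "rest_of_pool": 624}, 625_000_000)
```

The key property is that the block finder's payout depends only on their share of the work, exactly as in the example above.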

As long as you have two (or more) blocks at the same height, and all of them are valid, you can locally pick another block, and work on it. The only thing you cannot change is forcing other nodes to think that your block is the only valid block, and make it a consensus rule. But when it comes to picking the block to work on, you can locally use any method you want, and stay in the same network. You will have the same chances of picking it wrong as other nodes, as long as you apply it only for a single block, and keep the actual chainwork rules for two or more blocks.
legendary
Activity: 990
Merit: 1108
January 02, 2023, 04:09:25 PM
#28
But looking forward, all the work which attempts to find a block, regardless of whether or not that block is accepted, is contributing to the difficulty of finding a block and therefore the security of the network.

I think everyone would agree with that, if we assume that the miner is honest.
I.e. they follow the longest chain rule in attempting to find a block that builds on the block with the most cumulative difficulty.

The remaining questions are:

1) If a new block has been found but not propagated to a specific miner yet, is that miner's attempt wasted?

2) If there are multiple new blocks found nearly simultaneously and both are being mined upon, are the attempts on top of the losing block wasted?

2) should be answered in the negative, since even with two tips at height h (and thus an undetermined height h winner), all the hashpower is still working on finding a block at height h+1, and the expected time to find one is no longer than if there is a unique tip at height h.

But I would answer 1) in the positive...
legendary
Activity: 1512
Merit: 7340
Farewell, Leo
January 02, 2023, 01:48:48 PM
#27
So, if you think that any work is "wasted", then note that in pools it is not. If miners are rewarded for "invalid" blocks (because of not meeting the difficulty), then I am pretty sure they are also rewarded for stale blocks.
That doesn't mean it isn't waste. The administrator of a pool decides to do various things to keep the business running. An attractive policy is to pay miners for blocks, whether these are stale or not. But stale blocks don't create money. The money comes from the business's budget. Stale blocks constitute an expense for the pool that doesn't contribute in any way. That's the definition of waste.

But looking forward, all the work which attempts to find a block, regardless of whether or not that block is accepted, is contributing to the difficulty of finding a block and therefore the security of the network.
It isn't. Difficulty determines the security of one chain. Stale blocks create a situation where we have two chains, therefore two potential difficulties. The security of the network is essentially split in half for a moment, and the next miner decides which part will be sacrificed.

I thought it is apparent enough that block interval comes with an opportunity cost. That cost defines the security that would have been provided if there was a greater block interval. But in the end, it's a tradeoff. Lower security for faster confirmations. Maybe it is technically wrong to call stale blocks waste, because you actually gain something-- be it time or security accordingly.
legendary
Activity: 2268
Merit: 18748
January 02, 2023, 12:32:24 PM
#26
And energy spent on a chain that is not the longest one difficulty-wise, is wasted energy.
This is true if it is external hash power coming from elsewhere and attempting to 51% attack the main chain, for example. But (and I could well be wrong) I don't think it is true if it is hash power which is already mining the main chain that temporarily breaks off to attempt to mine a fork before rejoining the main chain.

In my previous example where the hash rate splits evenly in two, then it doesn't matter that both halves are attempting to build on top of a different block. The total hash rate hasn't changed, and so the next block will still arrive in 10 minutes on average. If half of the network's work was truly wasted, then the next block would take 20 minutes to arrive. But because we haven't pre-determined which fork will win, then the work of both forks is contributing to the security of the network.
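Under the usual memoryless (Poisson) model of mining, this is easy to check numerically: a 100/100 EH/s split races two independent processes, and the first of the two to succeed arrives just as fast as a single unified 200 EH/s network (hashrate figures as in the example above):

```python
import random

rng = random.Random(7)
TRIALS = 100_000

def mean_minutes(total_ehs):
    """Average minutes to the next block, scaling from the baseline
    assumption that 200 EH/s yields one block per 10 minutes."""
    rate = total_ehs / 200 / 10          # blocks per minute
    return sum(rng.expovariate(rate) for _ in range(TRIALS)) / TRIALS

full = mean_minutes(200)                 # unified network: ~10 min
half = mean_minutes(100)                 # half the network idle: ~20 min

# A 100/100 split: the next block comes from whichever fork finds one
# first, i.e. the minimum of two independent 100 EH/s processes.
rate = 100 / 200 / 10
split = sum(min(rng.expovariate(rate), rng.expovariate(rate))
            for _ in range(TRIALS)) / TRIALS   # ~10 min again
```

Only the "half the network does nothing" case slows the chain down; the split case does not, which is the distinction the paragraph draws.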

All the work put in the invalid chain is wasted work, because the same work could have been used to provide security.
But it did provide security. If the work on the stale chain had found a successful hash first, then it would be the main chain. Just as if the work on any of the failed candidate blocks had found a successful hash first, then that candidate block would be on the main chain. If we look at a pool like BTC.com for example - they have an estimated 3% of the hashrate, but haven't found a block in almost 100 blocks. Does that mean all their work on their now invalid candidate blocks was wasted?

I think the confusion here is stemming from our frame of reference. If you look at the chain in retrospect, then all the work that didn't find a block can be called "wasted". But looking forward, all the work which attempts to find a block, regardless of whether or not that block is accepted, is contributing to the difficulty of finding a block and therefore the security of the network.
copper member
Activity: 821
Merit: 1992
January 02, 2023, 11:41:31 AM
#25
Quote
And since block interval is what determines that percentage, adjusting it would adjust the waste as well.
But it is already adjusted on other networks. If you want to make it shorter, then note that in pools, miners submit their shares every 30 seconds or something like that. Also, they submit "invalid" blocks, in that sense the Proof of Work does not meet the network difficulty, but they are rewarded with some fractions of the coinbase.

Another thing is making the block time longer, for example one hour. For compression and Initial Block Download, I also thought about two weeks per "package". And for sidechains, it is proposed to be three months per on-chain update.

So, if you think that any work is "wasted", then note that in pools it is not. If miners are rewarded for "invalid" blocks (because of not meeting the difficulty), then I am pretty sure they are also rewarded for stale blocks.
legendary
Activity: 1512
Merit: 7340
Farewell, Leo
January 02, 2023, 09:14:14 AM
#24
But the stale block from such a chain split is not invalid. It is perfectly valid, and could indeed have been the accepted block if a different miner had found the next block.
It's the old chain that's invalid. The block might be valid, but another miner successfully questioned that chain. Transactions in that block are not confirmed anymore; at least not until someone else confirms them. And energy spent on a chain that is not the longest one difficulty-wise is wasted energy.

All the work built on top of it is still contributing to the security of the network.
All the work put in the invalid chain is wasted work, because the same work could have been used to provide security. Energy spent on such chain doesn't help anywhere.

It might be wasted in the sense that it was later decided that this hash power was mining on top of a stale block, but it is not wasted in the sense that at the time it wasn't contributing to the security of the network.
No, but we acknowledge that a small percentage of blocks are stale. Therefore, energy spent on that small percentage is wasted. We might not know which blocks are stale, and thus we don't know at the time we mine them that we're wasting energy, but that's what we do.

And since block interval is what determines that percentage, adjusting it would adjust the waste as well.
legendary
Activity: 2268
Merit: 18748
January 02, 2023, 07:10:39 AM
#23
Building on top of a valid block, regardless of the invalid hashes, is the work. Building on top of an invalid block, while part of the process, is and should be considered waste.
This would be true only if the block being built upon is invalid. That would indeed be wasted work, as it is impossible for that work to find the next block and therefore it isn't contributing to the security of the network. But the stale block from such a chain split is not invalid. It is perfectly valid, and could indeed have been the accepted block if a different miner had found the next block. All the work built on top of it is still contributing to the security of the network.

It might be wasted in the sense that it was later decided that this hash power was mining on top of a stale block, but it is not wasted in the sense that at the time it wasn't contributing to the security of the network.
legendary
Activity: 1512
Merit: 7340
Farewell, Leo
January 02, 2023, 06:38:32 AM
#22
We do not consider those hashes to be wasted, so why would a hash that ends up being reorged out be considered wasted? It's no more or less wasteful than all of the hashes that weren't valid.
It's wasted in the sense that in a parallel system, wherein you had a 20-minute interval and the exact same hash rate, you'd have a lower chance of marking valid blocks as invalid.

Reorgs aren't bad because they waste energy. They're bad because they mean that low confirmation numbers cannot be relied upon.
Unreliable low confirmation numbers mean less security. Less security with the same hash rate, when that very hash rate only contributes security-wise, is waste.

They all went to securing the network, regardless of the method by which they were unsuccessful.
Building on top of a valid block, regardless of the invalid hashes, is the work. Building on top of an invalid block, while part of the process, is and should be considered waste. Just as executing a 51% attack without the necessary hash rate is discouraged, because the attacker is likely to spend energy beating the air-- ergo, to waste energy.
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
January 02, 2023, 05:50:23 AM
#21
The analogy with the dice is likely flawed, and I find it more complicated than needed. Mining works simply enough; over-simplifying might have the opposite result.

Energy is only wasted if two or more miners are searching for the hash in the same search space.

E.g. miners A and B are both calculating the SHA256d in the region xxxxxxxxxxxxx[10000000-20000000]xxxxxxx... where the x's are constant digits and might differ between the two. But within each of A's and B's hashes, this sequence of digits is always going to be the same, effectively making A and B adjust only the digits in brackets.

When you have A and B searching in such a small range like that, there will of course be overlapping in the search space fairly often, which is why most mining software and pools try to give miners unique and sufficiently large ranges to work on.
legendary
Activity: 2268
Merit: 18748
January 02, 2023, 05:03:38 AM
#20
Therefore, one of the two winners' work (for rolling, whatever) is wasted.
I don't think it is.

The single winning hash itself is wasted, sure, because that block is eventually made stale* when the other winning block at the same height is built upon. But all the hashes which failed to find a winning block are not wasted.

As an example, consider that we have 200 EH/s at present. With 200 EH/s, the average block time is 10 minutes. Two miners simultaneously find a block at the same height and broadcast them. Half the network, 100 EH/s, attempts to build on Block A, and 100 EH/s attempts to build on Block B. Within 10 minutes on average, someone will successfully mine another block on top of one of those blocks, and the other block will be discarded.

Now, let's take the situation where half the network build on Block A, and the other half of the network do nothing. We now only have 100 EH/s instead of 200 EH/s, and it takes twice as long to find the next block. All the miners who were trying to mine on top of the now discarded Block B did not waste their work any more than any other miner who did not find the winning hash of the next block wasted their work. They all went to securing the network, regardless of the method by which they were unsuccessful.



*Although commonly used, orphan is the wrong term here.
legendary
Activity: 990
Merit: 1108
January 02, 2023, 02:32:10 AM
#19
When there are chain splits, every single hash computed on the losing side of the split is not wasted. They could have been valid blocks, and could have made that side the winning side.
They *are* valid blocks. But they got orphaned, and orphanage is waste. Ideally you want all valid blocks to be building on each other sequentially, preserving every valid block, with the cumulative diff fully reflecting the hashrate. With orphans, the cumulative diff underestimates the hashrate.

This is exactly why shortening the block interval too much, let's say much shorter than a minute, is bad in PoW. Because it increases the orphan rate, and thereby the waste.
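The interval/orphan tradeoff described here can be roughed out with the standard approximation that a block goes stale when a competitor appears within the propagation window (the ~6-second propagation delay below is an assumed figure, not a measured one):

```python
import math

def stale_rate(block_interval_s, propagation_s):
    """Rough stale-block probability under a Poisson arrival model:
    the chance that a competing block is found while the first one is
    still propagating, ~ 1 - exp(-propagation / interval)."""
    return 1 - math.exp(-propagation_s / block_interval_s)

DELAY = 6  # assumed network propagation delay, seconds
for interval in (600, 60, 10):
    print(f"{interval:>4} s blocks -> ~{stale_rate(interval, DELAY):.1%} stale")
# Shrinking the interval from 10 minutes toward the propagation delay
# pushes the stale rate from around 1% up toward tens of percent.
```

This is why "much shorter than a minute" is where the orphan rate, and thereby the waste, starts to dominate.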
staff
Activity: 3458
Merit: 6793
Just writing some code
January 01, 2023, 09:20:19 PM
#18
Therefore, one of the two winners' work (for rolling, whatever) is wasted.
What makes you say that?

Every single hash takes the same amount of energy. Every single hash has the same probability of being a valid proof of work. The vast majority of hashes are not, and so are discarded. We do not consider those hashes to be wasted, so why would a hash that ends up being reorged out be considered wasted? It's no more or less wasteful than all of the hashes that weren't valid.
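The independence being described is just the memorylessness of a Bernoulli process: each hash is a fresh draw whose win probability is fixed by the target alone. A small sketch using the well-known difficulty-1 target:

```python
MAX_HASH = 2**256

# A SHA256d output is modeled as uniform on [0, 2**256); a hash "wins"
# iff it falls below the target, so every hash is an independent trial
# with the same probability -- past failures change nothing.
DIFF1_TARGET = 0x00000000FFFF0000000000000000000000000000000000000000000000000000

def per_hash_win_probability(target):
    return target / MAX_HASH

def expected_hashes(target):
    """Geometric distribution: expected trials to first success = 1/p."""
    return MAX_HASH / target

# At difficulty 1, a block takes about 2**32 (~4.3 billion) hashes on
# average; higher difficulties scale that number up proportionally.
p = per_hash_win_probability(DIFF1_TARGET)
```

Whether a given winning hash later ends up on a reorged-out branch does not enter this calculation anywhere, which is the point of the paragraph above.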

When there are chain splits, every single hash computed on the losing side of the split is not wasted. They could have been valid blocks, and could have made that side the winning side. Just because it ends up losing does not mean it is wasted.

Reorgs aren't bad because they waste energy. They're bad because they mean that low confirmation numbers cannot be relied upon.
legendary
Activity: 1512
Merit: 7340
Farewell, Leo
January 01, 2023, 04:39:18 PM
#17
The analogy with the dice is likely flawed, and I find it more complicated than needed. Mining works simply enough; over-simplifying might have the opposite result.

Approximately 800 rolls after the new player joined the game, they get lucky and roll 4 sixes.  They get $10.  Are any of your rolls "wasted", just because someone else won their game?
No, they are not. But this isn't the issue I'm raising. What I'm saying is: if both of you roll the 4 dice at the same time, and win at the same time, nobody wins... yet. Someone will have to win afterwards, and use one of the previous winners as a reference (prevBlockHash). Therefore, one of the two winners' work (for rolling, whatever) is wasted.

Let's get back to Bitcoin mining, because it's easier to formulate. If you drop the time interval to 1 minute, it's rational to assume that there will be more cases where two miners mine a block and broadcast it at the same time. This means it's more likely to have energy spent on blocks that are going to be orphaned, ergo invalid-- which is the definition of waste for Bitcoin.

I'm not saying that spending energy for finding that one hash is waste. I'm saying that spending energy on orphaned blocks is waste.
legendary
Activity: 3472
Merit: 4801
January 01, 2023, 03:23:22 PM
#16
    meaning that both miner B and miners from group B wasted their computational power.

    No.

    This is a common misconception.  There is no "wasted" computational power from failure to beat some other miner (or pool) in proof-of-work.

    Here's an analogy that might perhaps help you to see why:

    Instead of generating hashes, we'll roll dice.  Instead of building blocks and paying ourselves with inflation and fees, we'll simply win a game prize. Instead of a hash target, we'll have winning dice configurations.

    Here's the game rules:
    • You are given 4 balanced, fair, six-sided dice to roll (this is your "hashing algorithm")
    • You get to roll those dice once every second (this is the "hashpower" of your dice rolling ability)
    • The operator of the game will pay you $10 every time you roll sixes on all 4 dice simultaneously (This is your target)
    • Statistically, this will happen once every 1296 rolls, or approximately every 21 minutes (this is your "average block time")
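The bullet-point numbers check out in a quick simulation (each roll of all 4 dice wins with probability (1/6)^4 = 1/1296, so the sketch draws one uniform number per roll rather than four dice):

```python
import random

P_WIN = (1 / 6) ** 4          # all four dice showing six: 1/1296
rng = random.Random(123)

def rolls_until_win(rng):
    """Rolls until a win; every roll is an independent trial, so the
    count is geometric with mean 1/P_WIN = 1296."""
    rolls = 1
    while rng.random() >= P_WIN:
        rolls += 1
    return rolls

avg = sum(rolls_until_win(rng) for _ in range(5000)) / 5000
print(f"average rolls to 4 sixes: {avg:.0f} (~{avg / 60:.1f} min at 1 roll/s)")
# Comes out near 1296 rolls, i.e. roughly the 21-22 minute "block time".
```

Note also that nothing in `rolls_until_win` remembers earlier rolls or other players, which is the memorylessness the rest of this post leans on.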

    To start with, you are playing alone. Let's say you get lucky and get all sixes after 1100 rolls of the dice. How many of the previous 1099 rolls were "wasted"? Could you ever have gotten to the 1100th roll without first rolling those other 1099 times? You can't win the $10 without playing the game, and you can't play the game without rolling the dice a whole lot of times.

    Ok, you've won once. Let's say you continue to play the game and this time you're a bit unlucky. It takes you 1400 rolls of the dice to get all sixes this time.  How many of the previous 1399 rolls were "wasted"?  Again, the only way to get the 4 sixes is to roll the dice, and you're not going to get them on every roll. You HAVE to roll MANY times to get to the target. That's the way the game works. The losing rolls aren't "wasted", they are just part of the way the game (system) works. I could, perhaps, accept an argument that they are ALL wasted, since you put in effort and got nothing those times. That's an arguable point of view that the game itself shouldn't be played at all, but I can't see any argument that some of the losing rolls were "wasted", and some were not.

    Now, let's add a second player to the game.  Here's where people start to get lost when they think about this.  Someone else's win in this game has no effect on your ability to win.  There are now two of you playing, each rolling their own 4 dice every second.  When the other player gets 4 sixes, that person gets $10. When you get 4 sixes, you get $10.  Their win of $10 doesn't eliminate your ability to win $10.  It doesn't "reset" anything for you.  There is nobody "counting dice rolls" and making sure that you only get paid when you've rolled at least 1296 times since the last win that anyone had. There isn't even anyone counting rolls and making sure that you've rolled 1296 times since YOUR last win.  It's just: 4 sixes pop up, you win.

    So, you're both rolling dice, playing your own games.  As soon as you get 4 sixes, you're gonna get $10. As soon as they get 4 sixes they're gonna get $10. Approximately 800 rolls after the new player joined the game, they get lucky and roll 4 sixes.  They get $10.  Are any of your rolls "wasted", just because someone else won their game? Aren't you still just participating in your own game? Don't you still need to be rolling the dice in order to get your 4 sixes?

    300 rolls later you get lucky and get 4 sixes. That's only 1100 rolls you've made since the game started with the new player. You got your prize 196 rolls earlier than the average for your game, just like the very first time you played the game.  So, how many of your rolls before your win of YOUR game were wasted?  The 800 that you rolled while you were waiting for the other player to win their game? The 299 losing rolls that you rolled in your game since the other player won their game?  The full 1099 losing rolls that you made since YOUR last win?  How are they "wasted" if they had to happen in order for you to get to your winning roll?

    Are you seeing yet that there are two ways of looking at this:
    • There are NO WASTED rolls (hashes) that are part of the process.  That's just what you need to do to get to your winning roll (hash)
    • EVERY roll (hash) that doesn't win is a "waste", since you put in the effort to make that roll (hash) and that particular roll (hash) didn't pay anything

    In either case, the fact that someone else happened to have a winning roll (hash) has no bearing on which of your rolls (hashes) towards your prize (block) are wasted.  Either all of them are, or none of them are (depending on your point of view) regardless of what the other players in the game happen to be doing. Failure to get your 4 sixes before someone else gets their 4 sixes has no bearing on whether your rolls were wasted or not.
    staff
    Activity: 3458
    Merit: 6793
    Just writing some code
    December 31, 2022, 06:55:07 PM
    #15
    isn't there a way to avoid reorgs without incentivizing miners to be selfish? Couldn't there be a rule (that perhaps makes the system more subjective in terms of consensus) which would make nodes reject those blocks, and decide objectively between 2 blocks (without waiting for the next one to be built on top)?
    No. Given the same chain, two blocks found for the same height in that chain must be treated equally. Any measure that allows one block to be chosen over the other just means that miners have different targets to meet in order to get their block in. Using the actual work means miners might start trying for lower hashes than the target requires. Using something like size means miners will stuff transactions with crap, or not include any transactions at all. Two blocks built on the same parent must be able to be treated equally (prior to any children) to avoid gaming the system.

    Isn't a soft fork enough? I'm thinking of it as limiting the protocol rules further. Not violating the old rules.
    No. Your idea would mean that it is possible to reorg to a less work chain (under the current rules), which would not be compatible with non-upgraded nodes.
    legendary
    Activity: 2268
    Merit: 18748
    December 31, 2022, 08:52:16 AM
    #14
    Couldn't there be a rule (that perhaps makes the system more subjective in terms of consensus) which would make nodes reject those blocks, and decide objectively between 2 blocks (without waiting for the next one to be built on top)?
    I don't think so. Or at least, not without fundamentally changing what bitcoin is.

    At the moment, the split is resolved when one of the two competing blocks has more work built on top of it. That is the basis of proof of work. If you come up with some different mechanism to resolve the split, then it is no longer proof of work, but proof of something else. Further, what if your proposed mechanism resolved the split in favor of Block A, but some miner running outdated code or different software or whatever accidentally built upon Block B before anyone else built upon Block A? Do we now ignore this chain with more work?
    copper member
    Activity: 821
    Merit: 1992
    December 31, 2022, 07:18:00 AM
    #13
    Quote
    I'm still thinking though: isn't there a way to avoid reorgs without incentivizing miners to be selfish?
    Even if there were, then the question is: if you have a reorg-resistant chain, is it good or not? Because sometimes reorgs are needed, for example to fix bugs like the Value Overflow Incident. Imagine that someone sends a transaction that is valid under the current consensus rules, but was never intended to be valid. What then? How do you fix the chain without reorgs?

    Also note that there are some altcoins with some kind of "automatic lock-in" of the chain, for example after 10 confirmations it cannot be reorged. They are not in a better situation; it is actually worse, because they cannot fix things if something happens unnoticed and later needs fixing.

    Another thing is that chain reorganization is the only way to remove malicious data from the chain. How will you do that without reorgs when needed? Leave a hash, and force nodes to assume they don't need to know the content?

    Quote
    My thought was: low block intervals with zero cost.
    It should be handled by some second layers. If you are worried that miners produce "almost valid" blocks, or that anything is "wasted", then think about pooled mining instead of solo mining; in pools, those shares are not "wasted". And if you are worried about pool centralization, then think about decentralized pools, not about getting rid of them.

    Quote
    The cost of reorgs isn't high in bitcoin, because of the 10-minute block interval, but if you were to drop it to a 1-minute interval, you'd notice lots of orphan blocks.
    It is only a matter of setting the difficulty for the second network. Currently, miners in pools submit their shares more often than every 10 minutes, so if they for example send it every 30 seconds, then their mining process is unaffected by the Bitcoin block time, as long as it is longer than their time. They could be affected only when trying to make it slower, and submit something for example every hour, then they have to check things every 10 minutes, to make sure they work on top of the latest block. Also, for that reason, the longer the block time, the more layers can be attached under that chain, where there is always one hash per block, to move it forward.
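The share-difficulty arithmetic behind this is simple: a pool picks a per-miner share target so that shares arrive on the desired cadence regardless of the chain's block time. A sketch (the 100 TH/s hashrate and 30 s cadence are assumed example figures):

```python
def share_difficulty(miner_hashrate_hs, share_interval_s):
    """Share difficulty giving about one share per `share_interval_s`
    seconds, using the convention that difficulty 1 corresponds to
    roughly 2**32 expected hashes."""
    return miner_hashrate_hs * share_interval_s / 2**32

# A 100 TH/s miner asked to submit a share every 30 seconds:
d = share_difficulty(100e12, 30)
# d is on the order of 7e5 -- vastly below the network difficulty, so
# the miner gets steady feedback no matter how long blocks take.
```

The pool can tune the cadence freely as long as it stays shorter than the block time, which is the point made above.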

    Quote
    Isn't a soft fork enough? I'm thinking of it as limiting the protocol rules further. Not violating the old rules.
    It is similar to the first change, where we went from "the longest chain" to "the heaviest chain". As long as the network is small, it can be easily deployed. But changing it now is hard. As hard as fixing the off-by-one error in the difficulty calculation. You can run your rules in your local mining pool, and reward miners according to their hashes, but that's all; you have to be compatible with the current system. Unless you break SHA-256 or things like that, because then you need two difficulties, chain rehashing, and then you can easily produce a backward-compatible chain, and create any new restrictions for some new hash function on top of that.
    legendary
    Activity: 1512
    Merit: 7340
    Farewell, Leo
    December 31, 2022, 06:10:43 AM
    #12
    You're all correct. I'm still thinking though: isn't there a way to avoid reorgs without incentivizing miners to be selfish? Couldn't there be a rule (that perhaps makes the system more subjective in terms of consensus) which would make nodes reject those blocks, and decide objectively between 2 blocks (without waiting for the next one to be built on top)?

    But why?
    My thought was: low block intervals with zero cost. The cost of reorgs isn't high in bitcoin, because of the 10-minute block interval, but if you were to drop it to a 1-minute interval, you'd notice lots of orphan blocks.

    Doing this would require a hard fork for something that is not that frequent an occurrence.
    Isn't a soft fork enough? I'm thinking of it as limiting the protocol rules further. Not violating the old rules.
    legendary
    Activity: 2268
    Merit: 18748
    December 31, 2022, 05:52:03 AM
    #11
    So you're asking: what happens if Miner C mines block 700,001 and 700,002 whose total work would be more than Miner A's 700,001 and 700,002 combined? They'd be reversed. Quite unusual though to mine 700,001 when we're already in 700,002, unless you want to attack the network.
    It would not be a case of an attacker still working on 700,001 when the rest of the network has moved on to 700,002, but rather an attacker already mining 700,001 and keeping it secret.

    To build on the selfish mining idea in achow101's third paragraph above, imagine a miner finds block 700,001, and it has an exceptionally low hash, which would be enough to overcome two or even three blocks which have an "average" hash. This miner also includes, in every block they mine, a private transaction moving a large amount of bitcoin from one address they own to another address they own, which was never broadcast to the wider network. The miner decides not to broadcast their block 700,001, but instead keep it secret and instead broadcast a transaction which spends the aforementioned large amount of bitcoin with a merchant, exchange, whatever. They let that transaction confirm in block 700,001 on the main chain, get 2 or maybe even 3 confirmations, and then successfully reverse it by broadcasting their replacement block 700,001, which has a sufficiently low hash to reverse three existing blocks.

    There is nothing to stop every miner from including such a self transfer in every block they mine, and any time someone is lucky enough to find a very low hash they can keep their block secret and attempt a large double spend.
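    To put rough numbers on this, here is a toy sketch (made-up target, and "apparent work" defined as max divided by the hash, as proposed in this thread) showing how a single lucky hash outweighs several ordinary blocks under a hash-based rule, while counting the same as any other block under the current rule:

```python
# Toy sketch (not Bitcoin Core code): target-based work vs. "apparent work".
# Target-based work is the expected number of hashes to find a block;
# apparent work here is MAX // hash, as proposed in this thread.

MAX = 2**256 - 1
target = MAX // 1_000_000  # hypothetical difficulty: ~1M hashes per block

def target_work(t):
    return MAX // t             # the same for every block at this target

def apparent_work(block_hash):
    return MAX // block_hash    # depends on how lucky the hash was

typical_hash = target - 1       # a hash just barely below the target
lucky_hash = target // 4        # a hash four times lower than the target

# Under the current rule, both blocks count identically:
assert target_work(target) == 1_000_000

# Under the apparent-work rule, the one lucky hash outweighs
# four "typical" blocks on its own:
assert apparent_work(lucky_hash) >= 4 * apparent_work(typical_hash)
```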
    staff
    Activity: 3458
    Merit: 6793
    Just writing some code
    December 30, 2022, 08:51:33 PM
    #10
    This idea isn't new and this same question has been asked by a lot of people over the years. Conceptually, it doesn't make sense. It would also incentivize selfish mining which could make reorgs more frequent, not less.

    Conceptually, your idea doesn't actually measure the amount of work that went into the chain. The chainwork is a measure of how many hashes, on average, were required to create the chain. Because it is tied to the target, each block using the same target adds the same amount of work to the chain, regardless of the actual value of the hash. If this were changed to be based on the actual hash, the chainwork no longer represents the actual work done. A miner that gets lucky and finds a hash that is less than the target did not actually perform the number of hashes that would be needed to find a hash of similar value. They did not perform those hashes, so it would be misleading to include the apparent work of that block. Of course a miner could still mine a block less than the target with fewer than the expected number of hashes. But because all of the blocks with the same target add the same amount of work, with a sufficiently large number of blocks, it all averages out. Conveniently, the difficulty period ensures that there will be a large number of blocks at a given target for that entire period to achieve that average. This means that over the entire blockchain, the chainwork calculated will be a close estimate to the actual number of hashes that were required to produce that chain. If it used the hash's apparent work, then the chainwork would be an overestimate of the amount of work that was done.
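    The overestimation argument can be illustrated with a toy simulation (a sketch with a shrunken 32-bit hash space, a made-up target, and winning hashes assumed uniformly distributed below the target; not real mining data):

```python
# Toy sketch: summing "apparent work" (MAX // hash) overestimates the
# hashing actually performed, while target-based work matches it on average.
import random

random.seed(42)

MAX = 2**32 - 1
target = MAX // 1000          # hypothetical target: ~1000 hashes per block
block_work = MAX // target    # target-based work per block (= 1000)

N = 10_000                    # simulate N winning blocks at this target
hashes = [random.randrange(1, target) for _ in range(N)]

target_based = N * block_work                 # what Bitcoin counts
apparent = sum(MAX // h for h in hashes)      # the hash-based proposal

# Every winning hash is below the target, so every apparent-work term is
# at least the target-based work; lucky (low) hashes inflate it further.
assert apparent > target_based
```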

    This idea also further incentivizes selfish mining where a miner doesn't broadcast their block and instead extends it in private and waiting to broadcast it later. If a miner gets lucky and finds a block with a significantly lower hash, they could not broadcast the block and mine on it in private. Then they could broadcast it when the public chain's work gets close to the work of their private chain thereby reorging the chain and getting more rewards for themselves. This is easier to do when using the apparent work because a miner just needs to get lucky once, whereas with the actual chainwork system a miner needs to get lucky multiple times and produce multiple blocks faster than the rest of the network in order to perform such an attack. Depending on the thresholds miners choose for doing this kind of thing, it could actually make reorgs worse.
    copper member
    Activity: 821
    Merit: 1992
    December 30, 2022, 07:06:46 PM
    #9
    If you want to be convinced which system is better, you can write some simple tests. For example, you can use a single SHA-256 on some numbers, to get sample hashes, just to have some representation of hashes submitted by miners.
    Code:
    SHA-256("0")=5feceb66ffc86f38d952786c6d696c79c2dbc239dd4e91b46729d73a27fb57e9
    SHA-256("1")=6b86b273ff34fce19d6b804eff5a3f5747ada4eaa22f1d49c01e52ddb7875b4b
    Then, you can start from the simplest test: finding the strongest hash. You can run your code for a while and see what happens when you accept a new hash only if it is lower than the lowest one found so far.
    Code:
      0 5feceb66ffc86f38d952786c6d696c79c2dbc239dd4e91b46729d73a27fb57e9
      3 4e07408562bedb8b60ce05c1decfe3ad16b72230967de01f640b7e4729b49fce
      4 4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a
      8 2c624232cdd221771294dfbb310aca000a0df6ac8b66b696d90ef06fdefb64a3
      9 19581e27de7ced00ff1ce50b2047e7a567c76b1cbaebabe5ef03f7c3017bb5b7
     39 0b918943df0962bc7a1824c0555a389347b4febdc7cf9d1254406d80ce44e3f9
     51 031b4af5197ec30a926f48cf40e11a7dbc470048a21e4003b7a3c07c5dab1baa
     55 02d20bbd7e394ad5999a4cebabac9619732c343a4cac99470c03e23ba2bdc2bc
    178 01d54579da446ae1e75cda808cd188438834fa6249b151269db0f9123c9ddc61
    245 011af72a910ac4acf367eef9e6b761e0980842c30d4e9809840f4141d5163ede
    286 00328ce57bbc14b33bd6695bc8eb32cdf2fb5f3a7d89ec14a42825e15d39df60
    886 000f21ac06aceb9cdd0575e82d0d85fc39bed0a7a1d71970ba1641666a44f530
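    For reference, that running-minimum search can be sketched in Python like this (a minimal sketch, not the original code used to produce the table above):

```python
# Hash the ASCII decimal strings "0", "1", "2", ... with a single SHA-256
# and record every time a new lowest hash appears.
from hashlib import sha256

def find_record_hashes(limit):
    best = None
    records = []
    for n in range(limit):
        h = sha256(str(n).encode()).hexdigest()
        if best is None or h < best:  # fixed-length hex compares numerically
            best = h
            records.append((n, h))
    return records

for n, h in find_record_hashes(887):
    print(f"{n:4d} {h}")
```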
    You can run such code for some time if you want to estimate your hashrate. Then, you can use a division to calculate the chainwork.
    Code:
    max=ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
    chainwork(number)=max/SHA-256(number)
    chainwork("0")=ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff/SHA-256("0")
    chainwork("0")=ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff/5feceb66ffc86f38d952786c6d696c79c2dbc239dd4e91b46729d73a27fb57e9
    chainwork("0")=2
    Then, you can calculate chainwork for each hash you need:
    Code:
      0 5feceb66ffc86f38d952786c6d696c79c2dbc239dd4e91b46729d73a27fb57e9    2
      3 4e07408562bedb8b60ce05c1decfe3ad16b72230967de01f640b7e4729b49fce    3
      4 4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a    3
      8 2c624232cdd221771294dfbb310aca000a0df6ac8b66b696d90ef06fdefb64a3    5
      9 19581e27de7ced00ff1ce50b2047e7a567c76b1cbaebabe5ef03f7c3017bb5b7   10
     39 0b918943df0962bc7a1824c0555a389347b4febdc7cf9d1254406d80ce44e3f9   22
     51 031b4af5197ec30a926f48cf40e11a7dbc470048a21e4003b7a3c07c5dab1baa   82
     55 02d20bbd7e394ad5999a4cebabac9619732c343a4cac99470c03e23ba2bdc2bc   90
    178 01d54579da446ae1e75cda808cd188438834fa6249b151269db0f9123c9ddc61  139
    245 011af72a910ac4acf367eef9e6b761e0980842c30d4e9809840f4141d5163ede  231
    286 00328ce57bbc14b33bd6695bc8eb32cdf2fb5f3a7d89ec14a42825e15d39df60 1296
    886 000f21ac06aceb9cdd0575e82d0d85fc39bed0a7a1d71970ba1641666a44f530 4331
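    That division step can be checked on a few of the hashes above (integer division of max by the hash, as in the table):

```python
# Apparent work of a hash = max // hash, matching the table above.
MAX = 2**256 - 1

record_hashes = [
    "5feceb66ffc86f38d952786c6d696c79c2dbc239dd4e91b46729d73a27fb57e9",
    "4e07408562bedb8b60ce05c1decfe3ad16b72230967de01f640b7e4729b49fce",
    "00328ce57bbc14b33bd6695bc8eb32cdf2fb5f3a7d89ec14a42825e15d39df60",
    "000f21ac06aceb9cdd0575e82d0d85fc39bed0a7a1d71970ba1641666a44f530",
]

for h in record_hashes:
    print(MAX // int(h, 16))
```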
    Next, you need some way to create a history. You can for example use two hashes: one for the previous hash, and another for the hash of the current number you just computed. Then you will have any linked chain you want.
    Code:
    previous_hash=0000000000000000000000000000000000000000000000000000000000000000
    first_hash=SHA-256("0")=5feceb66ffc86f38d952786c6d696c79c2dbc239dd4e91b46729d73a27fb57e9
    genesis_hash=SHA-256(previous_hash||first_hash)=SHA-256(00000000000000000000000000000000000000000000000000000000000000005feceb66ffc86f38d952786c6d696c79c2dbc239dd4e91b46729d73a27fb57e9)=30283b94911be7ecec5c6cfb40b36018249d60d688c496235052fd47a670522a
    second_hash=SHA-256("1")=6b86b273ff34fce19d6b804eff5a3f5747ada4eaa22f1d49c01e52ddb7875b4b
    SHA-256(genesis_hash||second_hash)=308b33009d4aef160331f016f461513ac93bcd480bb55c79fc202fcf6a1aa549
    And then you can compute the total chainwork for any chain. For example, if you assume that all numbers are hashed and chained in order, you will get this:
    Code:
    0 30283b94911be7ecec5c6cfb40b36018249d60d688c496235052fd47a670522a  5
    1 308b33009d4aef160331f016f461513ac93bcd480bb55c79fc202fcf6a1aa549 10
    2 6aec805e8653a6839375608419efd881cc429805a8a7a0212efc90137548296d 12
    3 16c48be0dcce9070b93dc86d6d73d0b84a92319a2d57af72aebee109e64f27d1 23
    4 c447e5ca45050ae1217dec0ba4574ddc28c7350a8d5e32feb90d2df76f23fe50 24
    5 69b41d8016b2b5532e917b80754148c778407988a2356c1f37deef0c76cc127e 26
    6 5d9ecaaaf47fe9c1a700edba6c169cc714deaece922071b19c2c401a573c09e9 28
    7 acae5c0307380196010e2845d6a22cc45a4136c9ad5350abd6caa775cfed18f7 29
    8 c0ff419b7d623bfa5c5bd53ae232997cb956fd8f0487b7edf46fb554a001b272 30
    9 850f4cec71494934113cbdb9f5765b91d1b238cca767c3b98925cb56be4fabaa 31
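    The running total in that table can be reproduced from the printed link hashes alone (a sketch; the hashes below are taken verbatim from the table above):

```python
# Cumulative chainwork: running sum of max // hash over the chain's links.
MAX = 2**256 - 1

link_hashes = [
    "30283b94911be7ecec5c6cfb40b36018249d60d688c496235052fd47a670522a",
    "308b33009d4aef160331f016f461513ac93bcd480bb55c79fc202fcf6a1aa549",
    "6aec805e8653a6839375608419efd881cc429805a8a7a0212efc90137548296d",
    "16c48be0dcce9070b93dc86d6d73d0b84a92319a2d57af72aebee109e64f27d1",
    "c447e5ca45050ae1217dec0ba4574ddc28c7350a8d5e32feb90d2df76f23fe50",
    "69b41d8016b2b5532e917b80754148c778407988a2356c1f37deef0c76cc127e",
    "5d9ecaaaf47fe9c1a700edba6c169cc714deaece922071b19c2c401a573c09e9",
    "acae5c0307380196010e2845d6a22cc45a4136c9ad5350abd6caa775cfed18f7",
    "c0ff419b7d623bfa5c5bd53ae232997cb956fd8f0487b7edf46fb554a001b272",
    "850f4cec71494934113cbdb9f5765b91d1b238cca767c3b98925cb56be4fabaa",
]

total = 0
totals = []
for h in link_hashes:
    total += MAX // int(h, 16)
    totals.append(total)
print(totals)
```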
    At this point, you can trigger any rules related to chain reorganization. If there is only one miner, then there is nothing to compare with. So, you can assume that there are N miners present in the network, and each miner generates its own hashes. First, let's assume that each miner has similar computing power. You can just generate numbers as before, and take them modulo N, then the remainder equal to zero means the first miner, the remainder equal to one is for the second miner, and so on.

    For two miners, it would mean the first miner will mine SHA-256("0"), and the second miner will mine SHA-256("1"), and each miner will try to extend the chain. First, you can assume that the previous block hash is zero, then calculate all hashes for all miners (here: two), and keep only the lowest hash. This is what would happen:
    Code:
    number=0, prev=0000000000000000000000000000000000000000000000000000000000000000, curr=30283b94911be7ecec5c6cfb40b36018249d60d688c496235052fd47a670522a
    number=1, prev=0000000000000000000000000000000000000000000000000000000000000000, curr=801b2d87516b57e17cd0cba517103bda889e8beba95fa22bd334816ae45b1771
    best=30283b94911be7ecec5c6cfb40b36018249d60d688c496235052fd47a670522a
    number=2, prev=30283b94911be7ecec5c6cfb40b36018249d60d688c496235052fd47a670522a, curr=9e1138503e2d5f372fcaea0281ced20fcb9ef0691416561a4a22923c0e78bfba
    number=3, prev=30283b94911be7ecec5c6cfb40b36018249d60d688c496235052fd47a670522a, curr=459fabbea886e1b92b975aa86de1e92d0369b4917134717823834fe6234b92fb
    best=459fabbea886e1b92b975aa86de1e92d0369b4917134717823834fe6234b92fb
    number=4, prev=459fabbea886e1b92b975aa86de1e92d0369b4917134717823834fe6234b92fb, curr=1fd79aaf345bf9dcfa9b2010b3a9c3dfeacaa7c0a197d855b099c1f1b2c2dcf9
    number=5, prev=459fabbea886e1b92b975aa86de1e92d0369b4917134717823834fe6234b92fb, curr=e74ebcd238f81648e6ed077b787994772ddc589b962d68cc4f58d3055393c07a
    best=1fd79aaf345bf9dcfa9b2010b3a9c3dfeacaa7c0a197d855b099c1f1b2c2dcf9
    number=6, prev=1fd79aaf345bf9dcfa9b2010b3a9c3dfeacaa7c0a197d855b099c1f1b2c2dcf9, curr=3011fee40b1fd3e7bdf3cba24357fa2c72ce4beac7d23ce3a860cf97277c9f4d
    number=7, prev=1fd79aaf345bf9dcfa9b2010b3a9c3dfeacaa7c0a197d855b099c1f1b2c2dcf9, curr=9ef582f8c255500bbb7cc9df6846678c4fdb73430ce9e8c2e8fb5dfdd00db211
    best=3011fee40b1fd3e7bdf3cba24357fa2c72ce4beac7d23ce3a860cf97277c9f4d
    number=8, prev=3011fee40b1fd3e7bdf3cba24357fa2c72ce4beac7d23ce3a860cf97277c9f4d, curr=9c49bce9d75cf932bbcb4025faadfeedd6e830733dfd87372524b30a2a3a8464
    number=9, prev=3011fee40b1fd3e7bdf3cba24357fa2c72ce4beac7d23ce3a860cf97277c9f4d, curr=f7a148c0ceda210ca8e9f9e31dbca1c3af36b2a64e6f6628a2d2150800e26401
    best=9c49bce9d75cf932bbcb4025faadfeedd6e830733dfd87372524b30a2a3a8464
    However, to complete the whole picture, you need more than that. You need to compute the total chainwork, because it is not only about picking the best hash from one of the N miners. So, you start from the zero hash, and the zero chainwork, and always pick the hash that results in the highest total chainwork after adding such block to the chain.
    Code:
    number=0, prev=0000000000000000000000000000000000000000000000000000000000000000, curr=30283b94911be7ecec5c6cfb40b36018249d60d688c496235052fd47a670522a, work=5
    number=1, prev=0000000000000000000000000000000000000000000000000000000000000000, curr=801b2d87516b57e17cd0cba517103bda889e8beba95fa22bd334816ae45b1771, work=1
    best=30283b94911be7ecec5c6cfb40b36018249d60d688c496235052fd47a670522a
    number=2, prev=30283b94911be7ecec5c6cfb40b36018249d60d688c496235052fd47a670522a, curr=9e1138503e2d5f372fcaea0281ced20fcb9ef0691416561a4a22923c0e78bfba, work=6
    number=3, prev=30283b94911be7ecec5c6cfb40b36018249d60d688c496235052fd47a670522a, curr=459fabbea886e1b92b975aa86de1e92d0369b4917134717823834fe6234b92fb, work=8
    best=459fabbea886e1b92b975aa86de1e92d0369b4917134717823834fe6234b92fb
    number=4, prev=459fabbea886e1b92b975aa86de1e92d0369b4917134717823834fe6234b92fb, curr=1fd79aaf345bf9dcfa9b2010b3a9c3dfeacaa7c0a197d855b099c1f1b2c2dcf9, work=16
    number=5, prev=459fabbea886e1b92b975aa86de1e92d0369b4917134717823834fe6234b92fb, curr=e74ebcd238f81648e6ed077b787994772ddc589b962d68cc4f58d3055393c07a, work=9
    best=1fd79aaf345bf9dcfa9b2010b3a9c3dfeacaa7c0a197d855b099c1f1b2c2dcf9
    number=6, prev=1fd79aaf345bf9dcfa9b2010b3a9c3dfeacaa7c0a197d855b099c1f1b2c2dcf9, curr=3011fee40b1fd3e7bdf3cba24357fa2c72ce4beac7d23ce3a860cf97277c9f4d, work=21
    number=7, prev=1fd79aaf345bf9dcfa9b2010b3a9c3dfeacaa7c0a197d855b099c1f1b2c2dcf9, curr=9ef582f8c255500bbb7cc9df6846678c4fdb73430ce9e8c2e8fb5dfdd00db211, work=17
    best=3011fee40b1fd3e7bdf3cba24357fa2c72ce4beac7d23ce3a860cf97277c9f4d
    number=8, prev=3011fee40b1fd3e7bdf3cba24357fa2c72ce4beac7d23ce3a860cf97277c9f4d, curr=9c49bce9d75cf932bbcb4025faadfeedd6e830733dfd87372524b30a2a3a8464, work=22
    number=9, prev=3011fee40b1fd3e7bdf3cba24357fa2c72ce4beac7d23ce3a860cf97277c9f4d, curr=f7a148c0ceda210ca8e9f9e31dbca1c3af36b2a64e6f6628a2d2150800e26401, work=22
    best=9c49bce9d75cf932bbcb4025faadfeedd6e830733dfd87372524b30a2a3a8464
    Here, things start to get interesting, because the total chainwork is the same for both miners. In the current system, we would keep both branches, and wait for the next blocks to resolve the tie. However, in your proposal, the lower hash would always be picked in that case. In your version, it will be resolved in this way:
    Code:
    number=10, prev=9c49bce9d75cf932bbcb4025faadfeedd6e830733dfd87372524b30a2a3a8464, curr=b8400b588f1712bf12752cb6935776c9cd439cafd28da485c0250bc6edb459a6, work=23
    number=11, prev=9c49bce9d75cf932bbcb4025faadfeedd6e830733dfd87372524b30a2a3a8464, curr=ef50a88c600369eb17a50ebeb8b6939a87cf377afb8834cbf292ce74309c8a53, work=23
    best=b8400b588f1712bf12752cb6935776c9cd439cafd28da485c0250bc6edb459a6
    number=12, prev=b8400b588f1712bf12752cb6935776c9cd439cafd28da485c0250bc6edb459a6, curr=08df3ff63192c80b39ba86615671fdfadfbd62f0640e794093b0c6f81733b95e, work=51
    number=13, prev=b8400b588f1712bf12752cb6935776c9cd439cafd28da485c0250bc6edb459a6, curr=d985028016122021a0910860ce796e18fbde4456bcb38d6901aee907fb2f64b5, work=24
    best=08df3ff63192c80b39ba86615671fdfadfbd62f0640e794093b0c6f81733b95e
    number=14, prev=08df3ff63192c80b39ba86615671fdfadfbd62f0640e794093b0c6f81733b95e, curr=19fc74c1b685ea3fd734df3d7c3b33d166cefbaa96e22ecc20e592c81563cb73, work=60
    number=15, prev=08df3ff63192c80b39ba86615671fdfadfbd62f0640e794093b0c6f81733b95e, curr=7a99736b1f9f82bd5850f65b27959e970ca08d95e60859c5e8c23223400c4172, work=53
    best=19fc74c1b685ea3fd734df3d7c3b33d166cefbaa96e22ecc20e592c81563cb73
    After writing more code, we can see what would happen on the alternative branch:
    Code:
    number=10, prev=f7a148c0ceda210ca8e9f9e31dbca1c3af36b2a64e6f6628a2d2150800e26401, curr=fe4732f0fa66063234d1a5b921fd352bf3bcefc579eb94e346b9b5f1543a507b, work=23
    number=11, prev=f7a148c0ceda210ca8e9f9e31dbca1c3af36b2a64e6f6628a2d2150800e26401, curr=d483e6062e644bff9be1617cb3d7578e53de71cbf6b655a0768a7c4d88d9c256, work=23
    best=fe4732f0fa66063234d1a5b921fd352bf3bcefc579eb94e346b9b5f1543a507b
    number=12, prev=fe4732f0fa66063234d1a5b921fd352bf3bcefc579eb94e346b9b5f1543a507b, curr=9365d5dff92ba341e548a4e7647e4f42ae3d0a235ac8ba2f8c0854b774ddc61a, work=24
    number=13, prev=fe4732f0fa66063234d1a5b921fd352bf3bcefc579eb94e346b9b5f1543a507b, curr=7d553d15194fbf46d6fc5671177006e6c2d72eaa8ce1693ffebaee420b756277, work=25
    best=7d553d15194fbf46d6fc5671177006e6c2d72eaa8ce1693ffebaee420b756277
    number=14, prev=7d553d15194fbf46d6fc5671177006e6c2d72eaa8ce1693ffebaee420b756277, curr=60522e578dcbe7d547c6233b03c6be75d75a200915124f025e829a616f080576, work=27
    number=15, prev=7d553d15194fbf46d6fc5671177006e6c2d72eaa8ce1693ffebaee420b756277, curr=044dd08a991a0d2530d7687fbf5f9ab192a955ebe0432e08f666d95e4c50a4ac, work=84
    best=044dd08a991a0d2530d7687fbf5f9ab192a955ebe0432e08f666d95e4c50a4ac
    As you can see, after mining 16 blocks, you have chainwork equal to 60 on one branch, and 84 on the other. After testing longer chains and different scenarios, where some attacker endlessly tries to re-mine the Genesis Block (or the earliest possible block after that), you would notice more unexpectedly low hashes. At first, picking the lowest hash seems like an obvious solution, but when you write more tests, you will notice more attacks, and there is no reason to introduce them into the current system.

    For example, even here, with some small numbers, you can see that it is possible to jump from 25 to 84 chainwork in a single block. Every once in a while, some miner will hit a block with the lowest block hash ever found. Statistically, it is inevitable: with so many miners trying, one of them will hit it eventually, just because of how many hashes are checked by all miners combined.

    Also, it is a good moment to open the whitepaper again and read chapter 11, "Calculations". Then you can compare your attack scenarios with the probabilities described there, and see whether they match the reorgs you observe during your testing.
    hero member
    Activity: 882
    Merit: 5834
    not your keys, not your coins!
    December 30, 2022, 06:23:19 PM
    #8
    the system works the way it is
    Probably mostly theoretical question; I'd agree, it's not something necessary right now or in the foreseeable future.

    But be aware, btc has done no real improvements to its onchain network for years, doubtful any would get thru now.
    Please don't derail the topic; look at https://github.com/bitcoin/bitcoin/blob/master/doc/bips.md and complain somewhere else.
    member
    Activity: 280
    Merit: 30
    December 30, 2022, 03:58:36 PM
    #7
    Only one coin network has no reorgs.
    That coin is Algorand, with 4-second blocks and immediate transaction finality.
    You might want to study how they did it and whether that design can be ported over.

    FYI:  https://www.algorand.com/technology/immediate-transaction-finality
    Quote
    the Algorand blockchain never forks.
    Two blocks can never be added to the chain at once because only one block can have the required threshold of committee votes.
    At most, one block is certified and written to the chain in a given round.
    Accordingly, all transactions are final in Algorand.
    When the consensus protocol decides on a block, this decision is never changed.


    But be aware, btc has done no real improvements to its onchain network for years,
    doubtful any would get thru now.

     
    legendary
    Activity: 3500
    Merit: 6320
    Crypto Swap Exchange
    December 30, 2022, 02:36:19 PM
    #6
    But why? Unless I am missing something the system works the way it is. Doing this would require a hard fork for something that is not that frequent an occurrence.
    Having an occasional reorg / orphan / lost block is just part of mining.

    -Dave
    legendary
    Activity: 1512
    Merit: 7340
    Farewell, Leo
    December 30, 2022, 12:55:23 PM
    #5
    The initial question in that post also proposes a solution similar to yours; theymos says about that:
    Resolving orphans as you suggest might actually make orphans more likely because miners would be incentivized in some cases to try replacing the most recent block rather than extending it, especially right before a big difficulty adjustment or when the most recent block contains a lot of fees.
    So the counter argument is: if the block hash of the last block is too close to the target, the miners become incentivized to reverse it (since it's easier), especially if it contains a lot of fees. But is it true, though?

    The current system works like this: to reverse a block, find two hashes below the target. My proposed system works like this: to reverse a block, find one hash below the previous hash (which is itself below the target); unless of course there is a difficulty adjustment between the blocks, in which case the new block hash only has to beat the new target.
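    The costs of the two rules can be compared with a toy calculation (made-up target, not real network parameters). The expected number of hashes needed to land below a value p is roughly 2^256/p, so beating a typical previous hash (around half the target) costs about two blocks' worth of work, while beating a previous hash that barely made it under the target costs only about one, which is exactly the incentive theymos describes:

```python
# Toy check: expected hashes to land below p is about MAX / p.
MAX = 2**256 - 1
target = MAX // 1_000_000      # hypothetical difficulty: ~1M hashes/block

fresh_block_cost = MAX // target            # mine a new block normally
median_winning_hash = target // 2           # a typical hash: ~half the target
reversal_cost = MAX // median_winning_hash  # hashes to beat that hash

# Beating a typical previous hash costs about twice a fresh block:
assert fresh_block_cost == 1_000_000
assert 2 * fresh_block_cost <= reversal_cost <= 2 * fresh_block_cost + 2

# But a previous hash just under the target is barely cheaper to beat
# than mining a fresh block, which is where the reversal incentive lies:
near_target_hash = target - 1
cheap_reversal = MAX // near_target_hash
assert cheap_reversal <= fresh_block_cost + 1
```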

    Indeed, good point. Reversing the previous block requires mining one more block in the current system.

    Just to be sure, because I might have this wrong: in this case, what will happen if
    - Miners A and B mine block 700,002 (one of them does)
    - Miner C mines his own blocks 700,001 and 02 with 01 having a lower hash than the previous block mined by miner B?
    So you're asking: what happens if Miner C mines block 700,001 and 700,002 whose total work would be more than Miner A's 700,001 and 700,002 combined? They'd be reversed. Quite unusual though to mine 700,001 when we're already in 700,002, unless you want to attack the network.

    So, in theory, you can calculate chainwork in a different way. But in practice, some miners will hit some lower blocks by accident, and then they will have an unfair advantage, because it can turn out, that their blocks will be stronger than N blocks in a row.
    Yes. Good point.
    copper member
    Activity: 821
    Merit: 1992
    December 30, 2022, 12:42:27 PM
    #4
    Quote
    Now my question is: isn't a block hash based chainwork more accurate than a block amount based one?
    No, it is not. I also thought about solving it in that way, and my conclusion is that the current chainwork is better. For example, you can look at the lowest block hashes in history, such as block 634842 with hash 000000000000000000000003681c2df35533c9578fb6aace040b0dfe0d446413. Sometimes, a miner can accidentally find a very low block hash, much lower than needed. In the current system, such a miner has no advantage over other participants. However, in your system, they would suddenly gain a lot of chainwork, sometimes enough to reorganize many blocks in a row.

    Also, that last thing seems to be the most dangerous. For example, the total chainwork could be based only on block hashes. What then? It turns out that in such a case, it is possible to mine the first block after the Genesis Block, and overwrite a lot of early blocks in that way. You would not have to start from the lowest difficulty and go through all the difficulty adjustments to build a stronger chain. All you need is to keep trying to mine a single block, betting that it will be stronger than other blocks.

    So, in theory, you can calculate chainwork in a different way. But in practice, some miners will hit some lower blocks by accident, and then they will have an unfair advantage, because it can turn out, that their blocks will be stronger than N blocks in a row.
    legendary
    Activity: 2912
    Merit: 6403
    Blackjack.fun
    December 30, 2022, 12:21:06 PM
    #3
    Example:
    • Assume we're at block height 700,000.
    • Miner A mines block 700,001: 0x00000000000000000002f39baabb00ffeb47dbdb425d5077baa62c47482b7e92
    • Miner B mines block 700,001: 0x000000000000000000010e76862af418f16ddb538f6f03ef7a7052b751d79b82 (hash is lower than A's, and therefore B has probably worked more)
    • Both miners broadcast their success at the same time.
    • All nodes will discard Miner A's block, because Miner B's block's hash is lower than A's.
    • Everyone works on top of Miner B's block.

    Doesn't that drop the cost of reorgs to 0?

    Just to be sure, because I might have this wrong: in this case, what will happen if
    - Miners A and B mine block 700,002 (one of them does)
    - Miner C mines his own blocks 700,001 and 02 with 01 having a lower hash than the previous block mined by miner B?

    hero member
    Activity: 882
    Merit: 5834
    not your keys, not your coins!
    December 30, 2022, 12:12:12 PM
    #2
    There have actually been a few questions about this on Stack Exchange; most notably, an answer from our very own theymos! Cheesy

    The "longest" chain is the one with the most work. A chain's work is equal to the expected number of hashes it would take for someone to replicate a chain of the same number of blocks and the exact same difficulty steps. So currently each block adds about 266 work to the chain because it takes on average ~266 hashes to solve a block with the current difficulty. Blocks with less difficulty add less work. (The current total chain work is around 280.) However, two blocks in the same difficulty period always add the same amount of work to the chain. A block with a lower hash is not considered better than one with a higher hash.

    The initial question in that post also proposes a solution similar to yours; theymos says about that:
    Resolving orphans as you suggest might actually make orphans more likely because miners would be incentivized in some cases to try replacing the most recent block rather than extending it, especially right before a big difficulty adjustment or when the most recent block contains a lot of fees.

    There's some discussion about it there; myself, I'm not sure whether it would make sense to start handling reorgs like that.
    Personally, I mostly resonate with this statement by theymos; it's not a big issue for Bitcoin in the first place so I wouldn't worry too much about it.
    In any case, orphans aren't much of a problem for the network, so there's no need to change things. Miners don't like orphans because it causes them to lose blocks or waste work, but making miners happy is not important
    legendary
    Activity: 1512
    Merit: 7340
    Farewell, Leo
    December 30, 2022, 09:12:26 AM
    #1
    This is my understanding of reorgs. Please correct me if I'm somewhere wrong.

    • Assume we're at block height 700,000.
    • Miner A mines block 700,001.
    • Miner B mines block 700,001.
    • Both miners broadcast their success at the same time.
    • Some nodes receive coinbase transaction A, and some coinbase transaction B (let's call one group A, and the other B).
    • Miners from group A mine on top of coinbase transaction A, and miners from group B mine on top of coinbase transaction B.
    • Group A mines the candidate block successfully, it adds it on top of 700,001 and shares it with everyone.
    • Now block height is 700,002, and the coinbase transaction from block 700,001 is A, meaning that both miner B and miners from group B wasted their computational power.

    The correct chain is the one with the most work. Group A's chain is correct, because its chainwork is greater. As far as I can tell, chainwork is calculated per block, from the difficulty: if the difficulty is 1 and you add a block on top of the chain, 0x100010001 of chainwork is added.
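    That figure checks out: Bitcoin counts a block's work as floor(2^256 / (target + 1)), and for the difficulty-1 target (nBits 0x1d00ffff) this comes out to exactly 0x100010001:

```python
# Work per block = floor(2^256 / (target + 1)); at difficulty 1 the target
# is 0xFFFF * 2^208, which yields exactly 0x100010001 (~2^32 hashes).
target_diff1 = 0xFFFF * 2**208

work = 2**256 // (target_diff1 + 1)
print(hex(work))
```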

    Now my question is: isn't a block hash based chainwork more accurate than a block amount based one? If, instead of difficulty, we used block hash as a meter for chainwork, there would be no reorgs, because either miner A or miner B would find a hash with more work than the other.

    Example:
    • Assume we're at block height 700,000.
    • Miner A mines block 700,001: 0x00000000000000000002f39baabb00ffeb47dbdb425d5077baa62c47482b7e92
    • Miner B mines block 700,001: 0x000000000000000000010e76862af418f16ddb538f6f03ef7a7052b751d79b82 (hash is lower than A's, and therefore B has probably worked more)
    • Both miners broadcast their success at the same time.
    • All nodes will discard Miner A's block, because Miner B's block's hash is lower than A's.
    • Everyone works on top of Miner B's block.

    Doesn't that drop the cost of reorgs to 0?